Since I will soon be posting a number of embedded projects and their code, I thought it would be a good idea to explain the opsys structure I almost always use for embedded projects. Experienced embedded coders will probably notice that this is a lot like the Windows 3.1 "cooperative multitasking" model, which didn't work all that well there. But for YOU it works great, because you have control over both the opsys and all the applications it runs. I will try to show this in C-like pseudocode, with explanations, and give some examples later in this thread.
Basically, after calling the ubiquitous init() function, main falls into a loop "checking for things to do", mostly by calling apps. The apps are all state machines, implemented with a switch (mystate) construct, in which the zero state is always "look for something to do, or return right away".
So main looks like this:
while (1)
{
statemachine1();
statemachine2();
statemachine3();
// and so on
}
Each app, or state machine looks like this:
statemachine1()
{
switch (mystate)
{
case 0: // look for something to do
// do whatever you do to see if there's something to do
if (work_to_do) mystate = 1;
break; // either way, return right away
case 1: // do that work
// do the work
if (work_is_done) mystate = 0;
break;
// if the work would be long, break it up into a get-ready state, a do-one-chunk state, and a done state, and handle those as more cases here
} // end switch
} // end state machine
Now, this only describes one thread of execution, and when trying to jam realtime multitasking into a system, you need or want more threads. In my designs, the other threads tend to be in ISR's.
We declare some global variables above main() for these to use to communicate with main() and the various state machines. We may do some processing in the ISR's themselves, but in general not much at all, since you want ISR's to run fast and return quickly. Again in general, we might have an ISR that handles a timer and provides a timing service, or one that, once started, continues to put out rs232 in the background without further attention from the main loop. Other IO that has to be fast is also done this way.
For example:
ISR Timer_ISR
{
clear_timer_interrupt(); // in whatever language, you often have to clear a flag someplace to indicate you've handled it
for (i = 0; i < NUM_ONE_SHOTS; i++) if (one_shots[i]) one_shots[i]--; // timer service for the background
increment_realtime_clock(); // some routine that keeps time of day for the background
}
The idea here is that there are some number of defined "one shots" that work much like a hardware one-shot does. The background code can set one of them to some non-zero number, and each hit on this timer routine will decrement it until it reaches zero. The background state machine can then look at the one it set and switch to another state once the delay it requested has been satisfied. In this example, the time of day is also kept for everyone and put into a global structure everything can read from. The usual caveats apply: if the time struct can change during a read of it by the background, the background code has to disable the timer interrupt before reading the current time, then re-enable it quickly. This prevents it from getting a partial update when the time rolls over while the background is in the middle of reading the structure.
For things like rs232 IO, we may define some global buffers and flags (for example, rs232_in_use). The background would call something like send232(mybuffer), which would send the first byte, set the rs232_in_use flag (perhaps checking it first and polling until it's clear), then enable rs232 interrupts so an ISR can send out the rest of the bytes; the ISR clears the rs232_in_use flag when it finishes, so other background routines can use rs232 without interfering with one another. This means the routines using rs232 should really just check the flag in their state machine and return immediately if it's set (not wasting time in case there's other work that can be done), trying again each pass until the port is free for them to use.
This may seem like a long description -- but considering I've just described a working multithreaded opsys and all you need to create one in a microprocessor -- not so bad, eh? The idea is that there is some polling going on, but each state machine or app looks just once, then returns to main() so that all can poll as needed with very little time delay -- one or two C statements worth. If they have nothing to do, or the resource they need is in use, they just return until there's work or the needed resource is available. This can easily handle some pretty complex situations where several realtime data streams are being moved and processed at once, from the user's point of view. Here, we almost always use this plan, and even a tiny PIC microprocessor can easily meet some fairly stiff realtime requirements, like doing rs232 both ways at 115k baud while taking enough a/d samples and counter counts to keep that channel completely full -- even with some fairly good pre-processing on input and output. In fact, it's more or less impossible to do on a big, fast PC what we do regularly in PICs, unless the PC hardware has fancy buffering so things don't get lost during the opsys's normal dead time -- when it's off on some demented errand of its own, it doesn't show those cycles being used, but it does use them, creating long pauses in user programs.
Hopefully, this will help explain some much more detailed code examples that will soon follow.
To recap -- main() just calls an init() function, then falls into a loop calling all these state machines that are like user apps. Each app has a "look for work" state that returns immediately if there's nothing for it to do, or does a chunk of work and returns, and manages its own state, setting it back to "look for work" when appropriate. ISR's handle realtime things and get/set flags for the rest. That's it!