Simple, fast, embedded opsys design

For stand-alone microprocessors

Simple, fast, embedded opsys design

Postby Doug Coulter » Sat Jul 17, 2010 3:06 pm

Since I will soon be posting a number of embedded projects and their code, I thought it would be a good idea to explain the opsys structure I almost always use for embedded projects. Experienced embedded coders will probably notice that this is a lot like the windows 3.1 "cooperative multitasking" model, which didn't work all that well for them. But for YOU it works great, since you have control over both the opsys and all the applications it runs. I will try to show this in C-like pseudocode, with explanations, and give some examples later in this thread.

Basically, after calling the ubiquitous init() function, main falls into a loop "checking for things to do", and does that mostly by calling apps, which are all state machines, implemented with a switch (mystate) construct, in which the zero state is always "look for something to do, or return right away".

So main looks like this:

void main(void)
{
    init();  // set up hardware, interrupts, and globals first

    while (1)
    {
        statemachine1();
        statemachine2();
        statemachine3();
        // and so on
    }
}



Each app, or state machine, looks like this:

void statemachine1(void)
{
    static unsigned char mystate = 0;  // each app owns its own state variable

    switch (mystate)
    {
    case 0: // look for something to do
        // do whatever you do to see if there's something to do
        if (work_to_do) mystate = 1;
        return;

    case 1: // do that work
        // do the work
        if (work_is_done) mystate = 0;
        return;

    // if work would be long, break it up into a get-ready state, a do-one-chunk
    // state, and a done state, and do those below as more cases
    } // end switch and state machine
}


Now, this only describes one thread of execution, and when trying to jam realtime multitasking into a system, you need or want more threads. In my designs, the other threads tend to be in ISR's.
We declare some global variables above main() for these to use to communicate with main() and the various state machines, and may do some processing in the ISR's themselves -- but not much at all, since you generally want ISR's to run fast and return quickly. Typically, we might have an ISR that handles a timer and provides a timing service, or one that, once started, continues to put out rs232 in the background without further attention from the main loop. Other IO that has to be fast is also done this way.

For example:

ISR Timer_ISR
{
    clear_timer_interrupt();  // in whatever language, you often have to clear a flag someplace to indicate you've handled it
    for (i = 0; i < NUM_ONE_SHOTS; i++)      // timer service for the background:
        if (one_shots[i]) one_shots[i]--;    // count down any running one-shots until they hit zero
    increment_realtime_clock();              // some routine that keeps time of day for the background
}

The idea here is that there are some number of defined "one shots" that work much like a hardware one-shot does. The background code can set one of them to some non-zero number, and each hit on this timer routine will decrement it until it reaches zero. The background state machine can then look at the one it set, and switch to another state once the delay it requested has been satisfied. And in this example, the time of day is just kept for everyone, in some global structure everything can read from. The usual caveats apply here: if the time struct can change during a read of it by the background, the background code has to disable the timer interrupt before reading the current time, then re-enable it quickly. This prevents it from getting partial updates when the time is rolling over during the time it takes the background to read the structure.
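
To make that concrete, here's a minimal sketch of both halves -- an app state waiting on a one-shot, and a safe clock read. The names (MY_ONE_SHOT, the hms struct, the enable/disable calls) are made up, and the ISR plumbing is pseudocode as above:

#define NUM_ONE_SHOTS 4
volatile unsigned char one_shots[NUM_ONE_SHOTS]; // counted down by Timer_ISR above
#define MY_ONE_SHOT 0                            // the slot this app claims

void delay_app(void)                             // hypothetical app using the timer service
{
    static unsigned char mystate = 0;
    switch (mystate)
    {
    case 0:                                      // arm a delay
        one_shots[MY_ONE_SHOT] = 100;            // 100 timer ticks from now
        mystate = 1;
        return;
    case 1:                                      // wait: costs one compare per main loop pass
        if (one_shots[MY_ONE_SHOT] == 0) mystate = 0;  // delay satisfied, go do the work
        return;
    }
}

struct hms { unsigned char h, m, s; };           // time of day kept by the timer ISR
volatile struct hms time_of_day;

struct hms read_clock(void)                      // background-safe read of the clock
{
    struct hms copy;
    disable_timer_interrupt();                   // the ISR can't roll the time over mid-copy...
    copy = time_of_day;
    enable_timer_interrupt();                    // ...and we re-enable quickly
    return copy;
}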

For things like rs232 io, we may define some global buffers and flags (for example, rs232_in_use). The background would call something like send232(mybuffer); which would send the first byte, set the rs232_in_use flag (checking it first, and maybe waiting by polling till it's clear), then enable rs232 interrupts so an ISR could send out the rest of the bytes and then clear the rs232_in_use flag, so other background routines can use rs232 without interfering with one another. This means that the routines using rs232 should really just check, and wait till the flag is cleared before calling the initiate routine -- they can do that in the state machine and return immediately (not wasting time in case there's other work that can be done) until it is ready for them to use.
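
Here's a sketch of that send path -- the names (send232, write_uart, the interrupt enables) stand in for whatever your chip and compiler actually provide:

volatile char rs232_in_use;              // set while the TX ISR owns the port
volatile char *tx_ptr;                   // next byte for the ISR to send

void send232(char *buf)                  // called from background state machines
{
    if (!buf[0]) return;                 // nothing to send
    // good callers check rs232_in_use from their own state machine and return
    // until it's clear, rather than spinning here
    tx_ptr = buf + 1;
    rs232_in_use = 1;
    write_uart(buf[0]);                  // send the first byte by hand...
    enable_tx_interrupt();               // ...the ISR takes it from here
}

ISR Tx_ISR                               // fires when the UART can take another byte
{
    if (*tx_ptr)
        write_uart(*tx_ptr++);           // feed the next byte out
    else
    {
        disable_tx_interrupt();          // hit the null terminator -- string done
        rs232_in_use = 0;                // hand the port back to the background
    }
}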

This may seem like a long description -- but considering I've just described a working multithreaded opsys and all you need to create one in a microprocessor -- not so bad, eh? The idea is that there is some polling going on, but each state machine or app looks just once, and then returns to main() so that all can poll as needed with very little time delay -- one or two C statements' worth. If they have nothing to do, or the resource they need is in use, they just return until there's work or a needed resource available. This can easily handle some pretty complex situations, where several realtime data streams are being moved and processed at a time, from the user's point of view. Here, we almost always use this plan, and even a tiny PIC microprocessor can easily meet some fairly stiff realtime requirements, like doing rs232 both ways at 115k baud while taking enough a/d samples and counter counts to keep that channel completely full -- even with some fairly good preprocessing on input and output. In fact, it's more or less impossible to do on a big, fast PC what we do regularly in PICs, unless the PC hardware has all this fancy buffering so things don't get lost during the opsys' normal dead time -- when it's off on some demented errand of its own, it doesn't show the cycles being used, but it does use them, and creates long pauses in user programs.

Hopefully, this will help explain some much more detailed code examples that will soon follow.
To recap -- main() just calls an init() function, then falls into a loop calling all these state machines, which are like user apps. Each app has a "look for work" state that returns immediately if there's nothing for it to do, or does a chunk of work and returns, and manages its own state, setting it back to "look for work" when appropriate. ISR's handle realtime things and get/set flags for the rest. That's it!
Posting as just me, not as the forum owner. Everything I say is "in my opinion" and YMMV -- which should go for everyone without saying.

Re: Simple, fast, embedded opsys design

Postby dnnrly » Tue Sep 21, 2010 10:54 am

I take it that there is a maximum of 1 thread per IO resource on that OS, or only 1 ISR thread per processor. I also assume locking primitives are strictly verboten.

Re: Simple, fast, embedded opsys design

Postby Doug Coulter » Tue Sep 21, 2010 1:10 pm

In a PIC, interrupts can interrupt one another if you set it up that way, and are obviously careful about how you handle data that more than one thread uses.
It depends a little on what you mean by "thread". In general, there's the background loop, which calls state machines -- time slicing or cooperative multitasking.
Those can "wait" by just sitting in a state that checks a variable and returns if it's not ready, or changes state if it is, so the next time round they do something about that variable having become true. Yes, no one is ever allowed to just sit in their own spin loop, ever -- the background loop is a while(1){} which calls all this in turn; it's the thing that does the spinning.

Typically one of the ISRs will be running off some timer, and implement some oneshots, which are just an array that gets counted down once per tick, if not zero already. So an "app" can set one to a number, then go into a state that looks for it to reach zero, then proceed -- but it only checks once per main loop roundy round. A slightly fancier version of this, which we did in a DSP, also had something a lot like windows messages, maintained by the background loop. In fact, all this is frighteningly like the win 3.1 cooperative multitasking way of doing things. It works better, because it's small and one guy can do it all, and presumably a great mind runs in the same track.... ;)

We don't do things like mutexes and semaphores. We find just having a bit someplace that is checked does that fine -- if your stuff ain't ready, return to the main loop; you'll get called again to check real soon. Again, every app in the background loop does one simple thing or check (depending on the state at the time), then returns pronto. If it has a lot of work to do, it breaks it into chunks (one state gets ready, the next processes one chunk) and then returns, so other tasks always get cycles in a timely way.
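
Here's that flag handshake in miniature (names are made up; volatile matters so the compiler doesn't cache the flag in a register):

volatile char sample_ready;              // set by the ISR, cleared by the background
volatile int  latest_sample;             // data the ISR buffered

ISR Adc_ISR
{
    latest_sample = read_adc();          // buffer the data...
    sample_ready  = 1;                   // ...and raise the flag
}

void sample_app(void)                    // background consumer, called from main loop
{
    static unsigned char mystate = 0;
    switch (mystate)
    {
    case 0:
        if (!sample_ready) return;       // nothing yet -- back to main, check next pass
        mystate = 1;
        return;
    case 1:
        process(latest_sample);          // one chunk of work
        sample_ready = 0;                // done with the data; the ISR may overwrite now
        mystate = 0;
        return;
    }
}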

Remember, if you have the whole design in your head, you can almost always avoid the need for those. Two background apps can't run at the same time -- so no need; they can just set and check flags to each other (in C for pics, a bool is one bit). ISR's in general buffer any data, and set a flag the background then checks, and maybe clears when it's done with the data. The main loop roundy round time is kept so short, this works great for hard realtime stuff. We've even used a pic 18f6520 to do audio rate autocorrelations and pitch tracking (a different algo I have a patent on) for a body worn prosthetic for deaf infants -- while doing all that and displaying on 10 3-color leds (formants and pitch), it also creates a pulse to a tactile piezo that vibrates on the skin, with hardware and software for arbitrary pulse shapes to 120v (and it runs off two nimh aa cells all day long doing that). That chip is on the large side for a pic -- and we were able to do nearly all of it in C there; only a couple things wound up in asm.

Now consider a much smaller chip, just enough to do some job, no room for waste at all -- that's where this thing shines. Say 4k code and 1k ram space, total, plus the on chip hardware (which tends to be nice and help a lot). That's what you wind up doing if you're programming or designing for a big manufacturer, as we did. If you can make the code tighter, they can then use a smaller chip, and everyone saves a ton of bucks. No room for waste there; they don't hire that kind of programmer at all for that work. They tend to want to use everything a chip has doing something useful to the product -- any unused anything is an avoidable cost. Very different from writing apps for desktops.

Remember, this works in machines that have almost no memory... for either code or data. For a big machine where you've got resources to waste, you can use a more full-blown opsys like uC/OS or something fancier. My mass spectrometer actually has win CE in the box -- it's that wasteful -- and it does nothing but create a DCOM activeX; the software in the host computer does ALL the work in "realtime" (not! -- it brings a respectable computer to its knees with crappy .NET code). At that level of waste, I'm almost ashamed to even own the thing.

At any rate, the sort of thing that needs to "yield" to the "opsys" here is done manually in every state of every app, simply by returning, perhaps having changed its state variable first, so it will do something else on the next call -- which will be soon. Most things only do a few lines worth of C per call to them, so they get back to the opsys and therefore the next app, quick.
Sort of threading, but not, and it's all manual, in the app code itself. The "not threading" aspect means a cycle and resource hungry opsys with semaphores etc. just isn't needed at all. Simple is good!

Re: Simple, fast, embedded opsys design

Postby dnnrly » Wed Sep 22, 2010 5:42 am

Oh my god - it's like PLC programming with C! Well, at least it's not ladder logic.

Re: Simple, fast, embedded opsys design

Postby Doug Coulter » Wed Sep 22, 2010 7:54 am

The PLC programming I've seen so far is kind of a drag-drop pictorial affair that maybe gets 10% of the CPU doing something useful -- at best.
They tend to hit a time crunch some of the time, and spin uselessly the rest, and can't really be counted on for hard realtime deadlines -- and they don't timestamp their data, so it's not so obvious how bad they usually are. But then, the fact that some of them use x86-class cpus that could run a desktop to do a simple PID loop for a heater control tells you all that another way.

No ladder logic here; that's far too inflexible. I'll post up some finished project code so you can see. It's really not hard to get your mind around; I must be doing a lousy job describing it.

At this point, we use this plan for a lot of things, and part of that is because we have written and debugged a comm protocol with the PC, some libraries on the PC side to use that efficiently there for debugging and real use, and so forth, along with hardware drivers for a ton of things you might want to hook up -- from LCD displays with a rotary-encoder-driven menuing system, to exotic a/d, flash memory, and other hardware, to using pic pins for things they were never intended to do. So I start almost any project with all the "hello world" and my style of debugging already working. For RT stuff, a debugger that halts the process -- the type that comes with all the IDEs -- is usually fairly useless. When you hit a breakpoint and stop, everything realtime overflows, drops data, and sets flags saying so, which makes it impossible to find out what was really wrong -- so we stream printf-type stuff all the time during runs to see what's happening when. Heck, we even use spare HW pins to flag when we are in, say, this thread or that app, by turning them on and off at the start/end of the various routines, so we can get a real picture of how it's all working together. Scopes and leds can be a lot faster than a printf, which tends to be a memory and cycle hog in these small chips, and turning that debug on and off via a #define can be enough to make or break poorly written code.
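
The spare-pin trick can be as little as this -- the pin names and the #define gate are made up, and output_high()/output_low() are CCS-style calls (use whatever your compiler has):

#define DEBUG_PINS                          // comment out to compile all of this away

#ifdef DEBUG_PINS
  #define MARK_ENTER(pin) output_high(pin)  // scope or led shows when we're in a routine
  #define MARK_EXIT(pin)  output_low(pin)
#else
  #define MARK_ENTER(pin)                   // expands to nothing in release builds
  #define MARK_EXIT(pin)
#endif

void statemachine1(void)
{
    MARK_ENTER(PIN_B4);                     // hypothetical spare pin assigned to this app
    // ... normal state machine body ...
    MARK_EXIT(PIN_B4);
}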

Heck, we think it's major that we even have C on a machine that doesn't have enough ram for a call stack, or the hardware in the CPU to make one. We used to do this pure-asm in versions that didn't even have a conditional jump -- they had a conditional skip that could skip a jump instead (which means you had to think in reverse on the condition tests). No malloc is used even now for machines that want to have a lot of 9's reliability -- all that is laid out in advance by the programmer explicitly, because a memory leak in a tiny machine will bring it down quick.
Many of the products we designed around these had no reset or power switch, and a re-boot meant a service call to the customer -- that level of reliability was required for say, a fire safety system, a paging system or inhouse PBX, or just about anything in avionics, or production robots.

Yes, the requirements are stringent. If you're doing it for love anyway -- do you really want anything less than the very best that is possible? This is a lot less about making it easy for the programmer, and a lot more about being useful to the user (in very limited hardware) than PC apps are. Most of what we've done and will do -- if you become aware that there's a computer in there, we failed. We want stuff "that just works". Like the PIC in your remote, the other one in the TV, the bread machine, the light dimmer, stoplights, your phone, the watchdog in some high reliability PC hardware, the security system in a public place, your car subsystems and so forth. There are more PIC's in the world than all other processors combined right now -- probably not more MIPs' worth, but more CPUs for sure. A lot of this is the system-on-chip aspect of these; the onboard hardware is so good it makes the rest of a product very easy and cheap to do -- very few other parts needed for a lot of jobs.

Re: Simple, fast, embedded opsys design -- boilerplate

Postby Doug Coulter » Sun Oct 31, 2010 2:03 pm

I'm currently in the process of converting our embedded opsys, written for the HiTech compiler, to the CCS compiler, and along the way, I have this setup, attached. I've not yet put in the good rs232 stuff that is interrupt driven and blindingly fast compared to the native CCS stuff (blocking code, no buffers), but it's handy anyway. This version allows you to use the CCS printf() to put things out of the serial port (if you don't mind waiting around in blocking code) and use putc and getc types of things (if you don't let their one-two byte buffer overflow -- their implementation saves ram & rom, but stinks otherwise).

It would be nice to have it all, but they don't reveal how they do their "streams" in this thing, and even the listing files don't contain the actual code they are using, sigh. Since my compiler is out of date (i.e., support for me has lapsed), I don't know if I'll go to the trouble to reverse engineer that or not -- I'll probably just convert to the new system and feed it strings from sprintf() and do my own input parsing using our LOP protocol going both ways. I'll put that up here too when it's ready.

But here you have some nice setup info for much of the hardware (a/d, pwm, timers, RS232) for a pic18lf2523 (nicely cheap chip), for use with internal oscillator and a watch crystal for accurate time of day timing, and a "blank" main loop to add apps to. Think of it as a quick-start for using this chip, or with slight mods, most 18 series pics. We call this sort of thing boilerplate around here -- just add your content; all the hair-pulling setup stuff is done, just crank out the application code now.

picexamp.zip
Project files for a PIC project
(31.63 KiB) Downloaded 340 times


In deference to our computer-challenged friends (i.e., those who don't run linux), here is the project directory in .zip format, which most should be able to read.

The project that got this going is a solar controller for my lab. I now have excess capacity to say the least (I just had to pull the switch on the roof panels to prevent serious battery overcharge) so I need one. The other advantage is that with some smarts here, you can gain on the order of 30% more power from the same panels.

The reason for that is panels are a constant current (kind of) source, with current proportional to photon count. The voltage is more or less limited by the fact that the "diodes" in a panel are forward biased in operation. To make sure panels put out enough even when hot (so the diode drops are less), they use more cells in series than are always needed. So in "normal" weather, the peak power voltage of a nominal 24 volt panel is more like 34-36 volts, and only drops to about 30v at tropical-in-the-sun, burn-you temperatures.

So, one can make an efficient buck switching supply that, instead of regulating output voltage, regulates the input voltage down to the max power point, which will vary a bit with temperature and light levels (there is always some series R somewhere). In the best case for this -- think mid-winter and batteries way down at 22 volts, with a 36 volt panel max-power point (and about 45 panel volts open circuit) -- with a 100% efficient switcher, you can get 1.63 battery amps for every panel amp, just when you need it the most!
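
The arithmetic, assuming a lossless switcher (power in equals power out):

    P = V_mpp x I_panel = V_batt x I_batt
    I_batt = (V_mpp / V_batt) x I_panel = (36 / 22) x I_panel = ~1.63 x I_panel

so roughly 1.6 times the panel current lands in the battery, versus throwing that headroom away with a direct connection.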

With the pic in charge of the switcher, you can do moderately smart things to make the batteries live longer, like taper the charge rate near full charge, and drop to a float voltage once the bank is full up, to just maintain the charge. Since the PIC knows what it's doing, this could also signal that spare power is being thrown away (just not accepted) and flip a relay to drive some diversionary low priority load -- in my case that will be a water distillation rig for my chemistry lab, and perhaps a space heater. In fact, the pic can know just how much power is falling on the ground, and activate the *correct* diversion load for that much power if it has a choice of a few.
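
In the cooperative style from the top of this thread, that charge logic might sketch out like so -- all the states, thresholds, and helper names here are hypothetical, just to show the shape of it:

enum { CHG_BULK, CHG_TAPER, CHG_FLOAT } chg_state = CHG_BULK;

void charger_app(void)                   // one pass per main loop round
{
    switch (chg_state)
    {
    case CHG_BULK:                       // batteries low: take all the panel power
        regulate_input_to_mpp();         // hold panel voltage at the max power point
        if (batt_volts() > ABSORB_V) chg_state = CHG_TAPER;
        return;
    case CHG_TAPER:                      // near full: back the current off
        taper_charge_current();
        divert_relay(spare_power());     // dump unaccepted power to a low priority load
        if (charge_amps() < FULL_I) chg_state = CHG_FLOAT;
        return;
    case CHG_FLOAT:                      // full: just hold float voltage
        regulate_output_to(FLOAT_V);
        if (batt_volts() < RESUME_V) chg_state = CHG_BULK;
        return;
    }
}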

PICs are fun. All the pins come up floating, so every time someone complains that some transient or weird condition blew their hardware -- it was the hardware guy's fault for not providing pullups or pulldowns to put the thing in a safe state till the software can come up, or otherwise having the hardware be safe in the absence of pic control; in other words, too lazy to do it right without a processor anyway. Since pic outputs are in the 30ma range, these can be fairly low values of R (like 1k), and the pic can easily override them when it wants to. The only NMI kind of thing on a pic is "reset"; boot time is in microseconds, and it can know why it booted -- reset, power up, watchdog timer and so on -- and take the correct actions for that. If it doesn't, then the software guy was lazy. This is still a heck of a lot easier than doing this job with discretes, particularly when fairly hairy state machines are involved.

After all, the sun comes and goes -- what state were we in before? Do we have to go to full charge, or should we resume "float"? What is the current instantaneous max power voltage from the panels (you have to search for that, or use a temp sensor out there)? And can we handle this much current? It's easy to sometimes get double rated power out of panels when the sun and clouds are just right, and double-sizing (at least) all the rest gets expensive on a multi KW system -- and less efficient, due to increased switching losses of bigger devices, or just more of them.

More on all this later, when the project is done, I'll put it up in the alt energy section. Yes, you can buy something like what I'm making off the shelf these days, but it seems a lot to pay ~$600 for a low voltage to low voltage DC-DC supply, and no one makes one that will handle my array -- at best you need many of them, all trying to stay in sync, not so great in actual practice.

FWIW, here's what my dev environment looks like at the moment. I'm running windows in a window, in virtual box under linux. I plan to move this over to pure linux at some point, but this works, easy and fast. And most of the things wrong with windows go away if it's in a sandbox and not allowed on the internet much.
Screenshot-1.png


For what it's worth, this gets around the main loop at 630 khz as measured by a scope on pin 10 of the pic, which means it's really twice that (it just toggles the line once per roundy round, which means two roundies per cycle of square wave). That's except for when it hits that printf -- where it slows down to sub-khz speeds till the characters are all out. So that's going to get fixed stat! Can't have odd 5 millisecond delays in something that can otherwise do Mhz response times -- a 5000:1 level of stupidity the way it is. Could this be why people think hard realtime programming is hard?
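
The measurement itself is just one line in the main loop (output_toggle() is the CCS-style call; the pin is whatever spare you have):

while (1)
{
    output_toggle(PIN_C0);   // scope here reads half the loop rate:
                             // one toggle per pass, two passes per cycle
    statemachine1();
    // ... the rest of the apps ...
}
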
We do want to use the cycles up, or we'd sell the customer a cheaper chip! But not waste them in spin loops; that's just dumb.

Re: Simple, fast, embedded opsys design

Postby Doug Coulter » Sat Nov 27, 2010 4:41 pm

As a result of some demand for this for things other than the solar project, the boilerplate opsys is getting some improvements. I have figured out how to eliminate the need for *any* crystals, if your seconds timing can be off a few percent or you are willing to tune a number on a chip by chip basis -- though it still wouldn't be as good as a wristwatch; if you want that, you need a watch crystal, and I've made it so simply commenting a #define in and out controls that. Not using the crystal frees up two I/O pins for other uses as well. This reduces the basic PIC hardware to a 5v supply and bypass capacitor, and an rs-232 converter chip if you want that. Can't do a whole lot better than that!
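
Roughly like this, in the init code -- the #define name is mine, and the CCS-style setup_timer_1() constants vary by chip, so check your device header:

//#define USE_WATCH_XTAL                       // comment in for wristwatch-grade seconds

// in init():
#ifdef USE_WATCH_XTAL
    setup_timer_1(T1_EXTERNAL | T1_DIV_BY_1);  // 32.768 khz watch crystal on the T1 osc pins
#else
    setup_timer_1(T1_INTERNAL | T1_DIV_BY_8);  // internal oscillator: two pins freed,
#endif                                         // a few percent timing error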

I plan to use the xtal when possible, as the couple percent error you get otherwise makes time-synced logging a little more difficult, and it can be important when teasing cause and effect apart later on from the logs. I am supposing it would be not-stupid to allow for one master PIC device to drive the other ones, so no matter what the PC logger does, they will all report time correctly for that reason. Any good engineer knows that even with precision stuff, clocks will drift apart otherwise.

I have posted a PCB layout on the thread in vacuum tech on the TPH-055 controller. Anyone who wants the actual files and tools, let me know, and I'll email you a ton of stuff with actual board examples -- it's a bit large to post here as an attachment without leaving out a lot of things I think anyone using these tools should have. There are numerous existing projects in forms that can be sent to a PCB house, which are valuable examples of the basic how-to in that package, but that makes it large.

I am now working on an interrupt driven version of RS-232 I/O, which will be a pure ascii implementation, using either \r or \n (or both, if from stupid windows machines) as input terminators, and the usual null terminator at the end of a string for sending (and if you want to send \r\n you can, but the null won't be sent). Our LOP scheme can be added later on, but for a lot of users, being able to have everything going over the wire be human readable in a dumb terminal (or emulator) will be important, so this goes first, though it's a little slower and won't do binary (which is... slower if you have to send binary as hex -- takes twice the bits, and worse if converted to decimal in the pic and so on). I'll post the new boilerplate opsys as soon as I've finished and checked it to my satisfaction here. The interrupt driven version will preclude the use of printf() in the user code; instead you'll have to use sprintf() and pass the string to the driver -- not so bad. Their printf() is a polled, blocking call, which will mess up people who don't understand realtime programming far more than this will.

I have found over some decades of software development that it's important to have the tools really-really ready, and sweet, before putting the application specific software on top. So what I'm trying to do, even though it puts off the start of the "real stuff", is get this as reusable as I possibly can so we can all use it for whatever projects come to mind -- in other words, make the best possible jumpstart for new projects. If possible, I may try to implement some of what we did in LOP, which allowed the PC to do things like peek and poke memory, change program ROM and EEPROM and so on, but we will see when that happens -- I don't need those for debugging anymore myself, so it'll be somewhat driven by people asking. Usually, one can implement this sort of thing as pure ASCII as well as in LOP -- it's just slower, and takes more code and more ram. In this case, for all but a super-tweaker project, the chip I'm basing this on (with only minor changes needed to use any 18f series PIC) has plenty of everything*, so it may not become an issue.

At least, this will keep my skills in this area from lapsing. Use it or lose it!

*if the designer has any brains that is. If you need bulk storage or any heavy-lifting math -- that's what PC's are good at. Do that there, not in a PIC. PICs are for where instant response and hard deadlines are needed -- things a PC can't do. Use the tool that fits each job.

Re: Simple, fast, embedded opsys design

Postby William A Washburn » Sat Nov 27, 2010 9:04 pm

Didn't know you wrote C but I'm impressed. I use C# (and really love it) for code-behind on my ASPX pages...Congrats...Bill Washburn

Re: Simple, fast, embedded opsys design

Postby Doug Coulter » Sat Nov 27, 2010 9:47 pm

Oh, I write in any number of languages, but I dropped MS about the time .NET came out, for various reasons (I did love DevStudio right up till then, though -- it's prominently featured in my book, Digital Audio Processing, and a bunch of magazine articles I wrote as well; best thing MS ever did, for real, and they actually listened to my bug reports). I've done pro software development for money since about 1970 or so... so, ASM for gosh knows how many processors, including ones I had to wire wrap after designing the hardware too (before microprocessors), old mainframes, IBM and DEC, TI DSPs, Z80's and up, 68ks, rabbits, PICs, etc. -- too many to recall easily now. C, C++ (MFC or not, but I actually liked MFC once I learned it), fortran, perl, shell and so on. Managed to skip COBOL, APL and so on. Not to boast -- you do something that many years, you either get good at it or you feel pretty stupid and should have changed jobs long since.

Once you learn a couple languages, it's just a matter of how do I do the same stuff in this new one, and what libraries are there to help (which is usually more work than learning the language itself). What's funny is I stink at HTML and various web programming things (XML parsing, yuck) -- I'm more of an embedded on-the-metal guy, and do PC apps mainly to deal with embedded stuff at the other end of some comm link. Of course, no adult admits to doing BASIC ;) ....and I'm just now beginning to get facile with SQL as I need it for my physics data logging. All that programming is why I can now type so doggone fast (as many might suspect -- I type as fast as a used car salesman can talk and am absolute "heck" on keyboards, wear out a few a year). I'd probably be famous for this, but a lot of my work was either for the "intelligence community" or customers who'd just as soon keep the credit to themselves. No skin off my back -- they paid nicely, and on time too, no whining from me about that. When you use a cell phone, there's a good chance your voice is going through my code at several points in the chain (I did some of the codecs and some of the call setup protocols, the best of which, sadly, aren't in use outside of LAN phone systems made by Valcom), which is why I could retire fairly young. As well as voice-to-text that worked well enough to cut the need for transcriptionists in the medical business, and inventing an HTML-like language to recreate insurance forms that even a receptionist could learn quickly -- total monkey drag-drop stuff that worked with voice filling in the form entries faster than realtime (recordings from the doctors' recorders, processed from disk very quickly at end of day).

Personally, were I going to write in an interpreted language, it'd be perl, which has a virtual machine tailored to the language, and is tons faster than any of the others (and it works cross platform better than any .net stuff). I abhor JAVA, and now Oracle gives me even better reasons to. However, coming from either strongly typed languages (C or Fortran) or almost completely untyped (ASM) I kinda feel like I should check my palms for hair after doing a lot of perl... The first time I added a numeric 1.0 to a textual 1999, and it worked, I kinda felt a little dirty...

Since the host gave me a choice, this site runs on Linux, for what that's worth. No issues with hacking it in years (ever).

One thing that really struck me hard a few years back. Some guy I was in a car with was complaining about how his computer didn't do what he wanted, and for maybe the first time I realized how powerless most people must feel about that -- I'd just have spent a few minutes writing a program (and did, for him). I had all this power and control most people don't without even realizing it, as computers went from a few rich outfits and government to mainstream, and I'd missed the implications of that almost entirely. It's only recently I've used PC's as a user, rather than as a tool to write code for other things...it feels kinda weird, actually.

Funny thing is, take a year or two off, and a lot of that goes away, and I'm back to checking books for syntax and precedence things -- my looking in books to typing ratio is way up now, though I still write what I think of as pretty good code (else I'd write it differently, duh) -- as simple as it can be, but no simpler. I haven't lost the "how to design a system" yet, though.
That one seems innate, something I try to teach, but it's mostly somehow in the genes it seems. How to model complex interactions in one's head and all that.

Re: Simple, fast, embedded opsys design

Postby Doug Coulter » Sun Nov 28, 2010 4:09 pm

OK, here's a new "release" of the basic PIC 18fxxxx opsys, with some serious improvements (see changelog.txt in the zip file for detail).

This release no longer needs any external timing parts at all (but you can use them by changing a #define if you want)

I've now added interrupt driven rs232 ascii both ways to this, which is a 5000:1 speed improvement on sending if used, and which eliminates character by character polling for input as well.
Blocking code ain't for me! And it makes doing things in real time very difficult. You can still use the blocking printf() call if you want, and are careful, but...this is far better.

The protocol for sending is to call SendPC() with a buffer pointer to what you want to send (a null terminated string). You can check a busy flag first if you're a good programmer; otherwise it will check it for you, and stall while busy -- there are always trade-offs, and in this small space, you can't make it stupid-proof. You can of course avoid that via the intrinsic timing in your app code as well, so no waiting need ever actually be done. Any non-null characters will go out.
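
Typical use, from inside some app's switch (the busy flag name and outbuf are placeholders for whatever you call them):

    case 5:                              // have a reading to report
        if (rs232_in_use) return;        // port busy -- try again next main loop pass
        sprintf(outbuf, "adc=%u\r\n", adc_value);
        SendPC(outbuf);                  // returns at once; the ISR drains the string
        mystate = 6;
        return;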

For receiving, it groups input into lines/messages with one of the terminators \r, \n, or \0 -- any, or all, work, and it will ignore any after the first one it sees (many ignorant windows programs send all three). This buffers input, copies it to a "safe" (some new meaning of the word "safe", for you HHGTTG fans) buffer for the background, and sets a flag the background loop can check. I have a very minimal input parser going that just sends any input back out for now -- TODO is to add some code that does things based on what is input: sets, gets, commands and such. Some will be the same for every project (the debug stuff) and some will be app-specific, of course. The background has to "eat" the buffered input before the next message is complete, or it will be wiped out, but it can overlap parsing the last input with a new message coming in -- the buffer copy to the background doesn't happen until the new message terminator. There are always trade-offs in this sort of thing.
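
The receive side, in miniature -- hypothetical names again, with the double-buffer copy happening in the ISR at terminator time:

#define BUFLEN 64                        // set via #define -- see buffer sizing below
volatile char rx_line[BUFLEN];           // the ISR assembles the incoming line here
volatile char rx_safe[BUFLEN];           // completed message, handed to the background
volatile char rx_msg_ready;              // background checks this, clears it when done

ISR Rx_ISR
{
    static unsigned char i = 0;
    unsigned char j;
    char c = read_uart();
    if (c == '\r' || c == '\n' || c == '\0')
    {
        if (i == 0) return;              // extra terminators (\r\n\0 together) ignored
        rx_line[i] = '\0';
        for (j = 0; j <= i; j++)         // copy to the "safe" buffer; parsing it can
            rx_safe[j] = rx_line[j];     // overlap the next message coming in
        rx_msg_ready = 1;                // flag: a whole line is waiting
        i = 0;                           // start assembling the next one on top
    }
    else if (i < BUFLEN - 1)
        rx_line[i++] = c;                // accumulate; overlength input gets dropped
}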

One is that it isn't so good for maximally efficient binary comm (no nulls), but I can also add that with a scheme I've thought of, should it be required.
For now, it works well with any terminal emulator on a PC, including the one with the CCS toolset, if you know to send a hex "a" (or other terminator) at the end of a message.
Hyperterminal works as well as it ever does, in windows.

Another trade-off is the size of the comm buffers -- there will be four of whatever size you set (via a #define). The current ram usage is about 17% due to this -- only 5% is other variables. Rom is now 8% or so.
I am using 64 chars per buffer now; hopefully it's enough (and for a slow PC, you should cut it to 16 or you can overrun it easily). Remember, this is one of the "big ram" chips, so it has a whole 1536 bytes of ram. You learn to use single bits for booleans and things like that, and use the PC to store anything big... right tool for the job and all that.

I added a message output in the init() function that tells you why it booted up this time. You can boot from power up, a reset, a watchdog timer timeout, and various other errors.
I did this for less skilled programmers who are real likely to forget what "cooperative multitasking" is all about and get into some spin loop -- the watchdog will time out and give the appropriate "you're doing something dumb" message in that case. And oh yes, the watchdog is now turned on, but currently set to "stupidly slow"; in most real apps you'd want to set it quicker. Further, depending on the reason for boot, you may or may not want to go through various init steps, as they'd already be done (and you wouldn't want to wipe out some variables in that case) -- it's up to the user/programmer.
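
The boot message comes from something like this in init() -- restart_cause() is the real CCS call, but the exact constant names vary by chip and compiler version, so treat these as illustrative:

void report_boot_reason(void)            // called once from init()
{
    switch (restart_cause())
    {
    case NORMAL_POWER_UP:
        SendPC("boot: power up\r\n");    // cold start -- full init is appropriate
        break;
    case WDT_TIMEOUT:                    // somebody sat in a spin loop...
        SendPC("boot: watchdog -- you're doing something dumb\r\n");
        break;
    case MCLR_FROM_RUN:
        SendPC("boot: reset pin\r\n");   // may want to skip re-initializing some state
        break;
    default:
        SendPC("boot: other\r\n");
        break;
    }
}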

Various other improvements and additions; this is nearly ready for real release as the basic boilerplate for all future apps on this chip family. Mods for other 18f chips will be trivial in software, and in fact, I'm using a hardware PCB layout developed (in 2002) many years before this chip family even existed -- PICs are nice like that, they make transitions easy. Almost any chip with the same number of pins will at least work in the same PCB layout, though you may not be able to use the new features without a little work (for example, ethernet or USB might need repurposing of those I/O pins in hardware, with the extra stuff to connect to the world). Further, they keep making the real old stuff available so manufacturers don't have to do expensive "NRE" work all the time -- for these, it really is Non Recurring Engineering.

Enjoy! Probably the next release will be the last one that isn't application specific; this will be the basis of a lot of embedded designs around this board. Hard part done -- just add application code at this point, or after I add some debug parsing to help those who need that (it's always nice to be able to peek and poke, set eeprom settings and such, and it'll be easy to add, but it's feet-up time for now).

The hope here is that by spending the extra effort up front, we save time on these things from now on.
lf2523opsys.zip
10.11 release of 18f2523 opsys, improved
(65.54 KiB) Downloaded 329 times

