Learning PIC microcontrollers and C programming

Postby Joe Jarski » Sat Jun 18, 2011 12:38 am

I'm starting out here with about zero knowledge of microcontrollers and programming, and I have about a year's worth of tinkering with electronics at this point - not an electronics whiz by any means. So, this thread is going to follow my progress on learning all of this stuff and serve as a place to ask newbie questions. Hopefully it'll help some other people who are in the same boat and maybe remove some of the magic that happens inside these chips. I've passed over several processor-based projects in the past, even when the code was provided, because I didn't know how to get the code into the processor or how to make any sense of the gibberish the source code was written in, let alone fix it if it didn't "just work".

There's a staggering number of MCUs available from different manufacturers - over 300 in the PIC18 series alone - and it's overwhelming if you don't have any prior knowledge or any idea where to start. Since I didn't, I got some recommendations from Doug on where to begin. As for the MCUs, he recommended that I go with the PIC18F series of processors, and he also recommended the CCS compiler as one of the better ones for the PIC chips. By using stuff similar to his, I'll have a chance to reuse some of the code that he's developed for data acquisition and other tasks. After a little more input I settled on starting out with the PIC18F45K22 Development Kit, which includes the compiler, the programmer, a development board, and an exercise book. I haven't received the kit yet, but it's on the way.

The other part of this is learning C programming to go along with it - another area where I have virtually zero knowledge. Once again Doug pointed me in the direction of what is known as the K&R book, "The C Programming Language" by Brian Kernighan and Dennis Ritchie. I've already received the K&R book and have started doing some of the exercises on a Unix workstation that I have, using vi to write the code and gcc as the compiler. I would guess Linux would probably be a more likely place to start for most people, using gedit to write the code and the gcc compiler - I used Unix because I had it available. As for Windows, I'm not sure if there are any free compilers available for download, but I don't think they're "built in" like they are on Linux/Unix.

I'm still at the most basic level right now, but the structure and syntax of the source code is starting to make a little sense as I write the programs in the exercises and run them... then usually fix them and run them again and eventually get it right.

It's gonna be a long journey, but I'll never get there without taking this first step.

Re: Learning PIC microcontrollers and C programming

Postby Starfire » Sat Jun 18, 2011 5:33 pm

Joe - would it help if the fundamentals were explained, along with a simple program to show the basics? I work mostly with assembler (machine code) and it is necessary to understand the chip's structure before learning a high level language.

Re: Learning PIC microcontrollers and C programming

Postby Joe Jarski » Sat Jun 18, 2011 6:50 pm

John - Yes, my goal (as usual) is to really get a good understanding and not just blindly copy what others have done. So, I'm open to any suggestions that anyone may have. I don't want to get too scattered early on, but I think something like that would be helpful in this case.

Re: Learning PIC microcontrollers and C programming

Postby William A Washburn » Sun Jun 19, 2011 9:58 am

Joe,

I have done C, C++ & C# programming for years and have read many good books on these subjects.

Depending on what flavor of C you will be using, I'd suggest going to Amazon and purchasing a GOOD (read the reviews) book on that particular version of the language. I'm partial to C#, since it is (for Microsoft users) tightly bound to their "Dot Net" object add-ins and is used almost everywhere you look today.

If you need to program down on the metal, as they say, C++ is the most complete of the versions. C# is the least complete but easiest to learn of the flavors. Most of all, however, get a good book (preferably one with a CD) and begin doing the examples. With a little work you will start to see the patterns in the language and things will become much easier.

Good luck, Bill

Re: Learning PIC microcontrollers and C programming

Postby Doug Coulter » Mon Jun 20, 2011 11:33 am

William - in this case the toolchain is chosen. There is no C++ or C# (in particular!) for an 8 bit CPU with no hardware data stack and no windows opsys. Please let us know when either becomes available for the platform under discussion -- it should be fun to see how they squeeze 100's of MB of libraries for either of those into the PIC's 32k rom....and 2k ram (tops!).

I suggested K&R for starters, as what we deal with in micro-land is hugely simpler than what you deal with in a PC -- there are no libraries to learn unless you write them first, no opsys hooks as you're writing that, and so on. K&R covers all we've got here nicely, short and sweet. Most of the books in the bookstore are far too into how to use windows libraries (whether MFC or the C# assemblies), draw on the screen (we have no screen until we put one on there), and interact with the opsys in general -- we don't have one of those either, other than the very simple foreground/background one I wrote that only has a few calls for utilities (like timing, keeping track of time, getting a bunch of background code called in order, and setting up whatever hardware is either in the pic or stuck onto it). It's a very different world.

There are really (at least) two things important to discuss here. One really isn't in books at all, other than being kind of glossed over in the compiler manual. That's how, for example, the C runtime gets in there, how all the configuration bits (called fuses in the CCS book) get set up, how memory is allocated (very complex in the CCS scheme, so that ram is reused for dynamic variables and a fake stack is created), and of course how their little hardware driver junk plays -- things like whether it automatically switches port bits between in and out or analog for every instruction that talks to a port, or leaves it up to you to do manually (which saves space and cycles). Nothing whatsoever to do with C as a language, but all those non-ANSI extensions they had to put in to make it run standalone on a PIC (or any other uP). This is so poorly covered by any book, and changes with every new rev of the toolchain besides, that we thought it would be good to cover it in detail here with all the real world issues you always have getting that first hello world going in a PIC vs. on a PC, where 99% of the code is "magic between you and the machine".
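
Just to give the flavor now, the top of a bare-bones CCS program comes out looking roughly like this -- take the specific fuse names and clock setup below as placeholders, since they vary by chip and by compiler rev (check the device header and the manual for the real ones):

#include <18F45K22.h>                            // CCS device header -- defines the pins, fuses, and built-ins for this chip
#fuses INTRC_IO, NOWDT, NOLVP                    // the "configuration bits": internal oscillator, watchdog off, low-voltage programming off
#use delay(clock=16000000)                       // tells the compiler the clock rate so delay_ms() and timing math come out right
#use rs232(baud=9600, xmit=PIN_C6, rcv=PIN_C7)   // sets up the hardware UART so printf() has somewhere to go

void main(void)
{
   printf("hello, world\r\n");                   // goes out the UART configured above
   while(TRUE);                                  // no opsys to return to -- just park here
}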

The second of course is C, which is pretty simple, but we'll be going way past just C syntax here (see K&R, it's all there), and on to "how to design a system, how to handle threads, how to structure code for maximal reuse given the quirks of a given platform".

And as a third thing -- perhaps some discussion on stuff like external hardware interfacing and driving -- I2C surely deserves its own thread, as it's darned complex to understand and use if you want high reliability (none of the vendor drivers do error correction or resync) and have more than one device on the bus.

In other words, this will be focused pretty closely on doing things with the PIC 18F platform, this particular CCS C compiler (because that first-order stuff is done utterly differently in every C compiler there is for this chip), and at first, this particular dev board. We can then branch out into supporting other PIC chips with different internal and external hardware -- which again has nothing whatever to do with ANSI C; it's all very platform specific.

At the same time, I ordered a USB dev board, with the intent of finally getting things going with that and the other chips in the PIC family that support USB natively. This will be used for the standard counter project we've talked about elsewhere and perhaps some other data acquisition stuff for physics. One potential project here will be to do a multi-channel analyzer in a PIC, at a decent cost, for spectrometry. I think Joe and I ordered boards at about the same time and we should get them at about the same time. I'd already bought some PICs and USB jacks, but their sample code "has issues" compiling for the smaller, cheaper PIC I want to port this to (28 pin through-hole, to make it easy for y'all to duplicate). That alone will be a teaching exercise as I show how to remap what they wrote onto less capable hardware and fix all the compile errors to make it work as a super fast serial port which even a Windows program can easily talk to, I guess (does C# do that ok?). The fancier types of comm are far easier to support in Linux, since there's a userland library there for them, but I don't think there is such a thing in Windows for bulk transfers -- you have to pay for MSDN, get the driver SDK, buy an ID (for driver signing), and write ring-0 code for that in Windows, which has fallen quite far behind Linux in its attempt to patch security on after the fact instead of designing it in.
Posting as just me, not as the forum owner. Everything I say is "in my opinion" and YMMV -- which should go for everyone without saying.

Re: Learning PIC microcontrollers and C programming

Postby Doug Coulter » Mon Jun 20, 2011 12:28 pm

I should add as an aside that there is a version of GCC for Windows, and I've used it -- it works fine (and is discussed elsewhere here). The biggest issue with GCC is the huge number of possible command line switches, and, if you're doing anything at all complex, learning the silly other language that make (or nmake) uses to give you a "project" context. There are some IDEs around that kind of "know" how to do this on various platforms for you, which saves a lot of work. The one I'm using on Windows for the moment isn't that great (or that bad), but is free, and available at http://www.bloodshed.net. SlickEdit is maybe the best cross-platform one but not free (and maybe not even available anymore, they got bought - maybe I can burn CDs of it without going to jail?). Quite a few free Linux apps can be made to go on Windows if you add the Unix compliance stuff and/or just recompile (MinGW or Cygwin). But programming for anything other than the DOS terminal in Windows is kind of a huge slog right on the metal -- MFC or the DevStudio junk are the tools of choice there if you can afford them. For Linux, almost the same story, except for the prices of the tools. Glade will help you put together a snazzy UI and hook into it much like DevStudio... but the resulting program can be made to run on either platform with at most a recompile (no point including windows.h in Linux and so on).

Now, back to on-topic!

Joe, is it OK if I move this topic where it belongs, under software/homebrew/embedded? Oops, I did it already :o

Re: Learning PIC microcontrollers and C programming

Postby Joe Jarski » Mon Jun 20, 2011 11:19 pm

OK, I got my kit today and started going through the exercises, making lights blink and so forth. I got through exercise 5, which uses the 3 LEDs as a binary counter. The compiler is pretty easy to use so far, as is the programmer. And I'm getting the basics of the program structure, including headers, #defines, and #includes. Although I do need to do a little more reading on some of the statements that I'm using, because I don't quite have a full understanding of all the variables that show up.

Now for some of the simple questions...

I know that there's a format for making the source code easier to read by using upper and lower case letters for certain things, but is the compiler *really* case sensitive to anything? I've tested it a little and it doesn't seem to matter, but I'm not sure if that's always true.

In the K&R book, the program always started out as "main()" - with the CCS stuff it starts as "void main()", "void main(void)" and a few other variations. What's the significance of "void"?

That's all that I have for now - it's getting late...

Re: Learning PIC microcontrollers and C programming

Postby Doug Coulter » Tue Jun 21, 2011 8:55 am

Case sensitivity varies by compiler -- despite all the "standards" and everyone claiming to meet them 100%, nope, they don't. In this particular case, the compiler can be either way! I'm not sure why they did that but it defaults to not-case-sensitive. For best code portability, I program as though it matters even when it doesn't, though -- it's a good idea that will save you work.
At first, you might want to turn that feature on and get good habits going.

If two names differ only by case -- you've got some real possibility for *human* confusion there anyway, best to avoid that. A convention most use is to make #defines uppercase (which instantly flags them as constants or macros). I think there's a preprocessor directive you can fool with to turn case sensitivity on and off. At any rate, I always pretend it's there, because in other tools it is, and you'd be surprised how often you can reuse code if you write it to whatever is the strictest (nearly all PC compilers are always sensitive to case, for example), and using the right case tends to make the code more readable anyway.

One trick I use case for is making names more readable. For example: routineThatDoesThat(). Or for a pointer, DataMemory, things like that. This works to make the code more readable without having to use underscores to separate words -- saves typing effort. There is debate about this (and everything else), with some people (usually not English speaking) thinking that's dumb, but I don't agree. Anything that makes the code easier to read and understand later is good as far as I'm concerned.
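
To make the conventions concrete, here's a quick made-up sketch (and I believe the CCS directive for forcing case sensitivity is #case, but check the manual for your rev):

#case                              // ask the compiler to treat case as significant (CCS-specific, if memory serves)

#define MAX_SAMPLES 64             // all caps flags a constant/macro at a glance

int16 sampleBuffer[MAX_SAMPLES];   // mixed case separates the words without underscores
int16 *pSampleData;                // a leading p is one common way to mark a pointer

int16 averageSamples(void)         // a verb-ish name says what the routine does
{
   // ...average the buffer here
   return 0;
}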

A side issue on that one. Never do something like

i++; // increment i

I'll strangle you! Obviously we incremented i, that's in the code itself - the comment should say something about WHY you incremented this variable!
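
Something more like this (a made-up example) actually tells the reader something:

i++;   // one more retry used up -- the caller gives up and resets the bus once this hits MAX_RETRIES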

Every "thing" in C is "strongly typed". That means the compiler needs to know what sort of variable it is so any actions it takes are correct when that variable is used in some other statement elsewhere. So if you call something a short int, and later add it to a long float -- the compiler can know how to pull that off. Or if you declare a pointer to something (an address of where it is), when you use that to fetch or store those "somethings" the compiler knows how to handle them -- and how to handle the pointer itself. If you increment a pointer, it might go up by one if the things pointed to are bytes, by two if 16 bit entities, and so on. The CCS compiler is actually semi bit-oriented and it's much more crazy inside than that....In machines with this little ram, you need to do things as efficiently as possible -- and the compiler supports using single bits as Boolean types -- and will tend to combine bit variables all into the same memory byte to save ram. It's one of the things this compiler does very well -- Else C might not be so practical in these tiny things.

In C you can do things like this because of strong typing:

// assumes you've declared those routines (and their arguments) someplace
float answer;

// convert the return values from the routines to floats, multiply, and store the result in answer
answer = GetThisByte(WhichOneIndex) * GrabThatInteger(PointerToSomeData);

Since any return value from any routine can be used in any statement -- it's important to declare what kind of thing the return value is so the compiler knows, just like for any other variable.
K&R C didn't demand this, and assumed int if you didn't say. For main, this seems silly as you never call main from inside main. However, on something like a PC, main IS called from the loader, and the return value can be meaningful to a batch file that's calling C programs -- most batch languages have a way to look at the return values of whole programs (usually zero for no error, and some nonzero number that describes the error). The keyword "void" is a special case, which can mean either "nothing" or "anything" depending on the context it's used in. As a return value it means "nothing", but you can also construct a pointer to "void", which means the compiler doesn't know how to increment that pointer for the actual data type, and you control that yourself when you use it. In that context, "void" means "anything".
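
Two quick made-up illustrations of those two meanings:

void blinkStatusLed(void)           // "void" as a return type: gives nothing back, takes nothing in
{
   output_toggle(PIN_B0);           // CCS built-in that flips a pin (the pin is just an example)
}

void showVoidPointer(void)
{
   int8 rawBuffer[16];
   void *anyOld = rawBuffer;        // pointer to "anything" -- the compiler won't scale arithmetic on it for you
   int8 *asBytes = (int8 *)anyOld;  // you say what the memory really is at the point where you use it
   asBytes[0] = 0;                  // now it's being treated as bytes
}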

The strong typing is a mixed blessing. So is case sensitivity. One big benefit of either one is that it can let the compiler catch all your typing errors, or most of them. For me, that's a big time-saver (and it saves burns on the chips -- and they do have a limit on that which is smaller than claimed). It's a good idea generally to let the compiler find as many mistakes as it can.

I will say this -- when checking error messages from the compiler (you can often double click on the message and get to the line of code that created it in the editor - nice), remember that in C most of what you can "say" can be interpreted as something legal to say - the compiler isn't checking for meaningful, logical statements, just ones that follow statement construction rules. This means the compiler might not detect an error on the line the actual mistake is on, but usually some line later on in the code. So the real poisonous errors, like leaving off a semicolon, or mismatched brackets might only show up as errors at end of file. Ditto leaving off the "" (either one) for a string. This is one place where a good code editor helps tons -- you can usually have it find the other bracket that should match the one you're on, or syntax highlight so if you leave off a " on a string, the whole rest of the code changes color.
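
A classic small example -- the real mistake is on the first line, but the compiler's complaint usually lands on the next one (or even further down):

int16 total = 0            // <-- missing semicolon here
int16 count = 0;           // ...but the compiler reports its error on this line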

You haven't lived until you've spent hours looking for an error (either compile or runtime) that is created by leaving off a semicolon way up in the code someplace....it's happened here, and more than once to more than one programmer. It's very humility-inducing when you (finally) find things like that. And it's an easy mistake to make when cutting and pasting.

As will be driven home -- computers are stupid. You can imagine them saying "you want me to do WHAT? OK, off I go to do it, no matter how stupid." After all, they can't know that setting this or that bit doesn't do something meaningful out there in hardware you added, or that some other routine might read it, or any of a number of other possibilities (and thank god -- it'd take too much typing to tell it about things like that). I still chuckle about the time when Dale, after editing and re-burning a PIC about 10 times in one hour (difficult programming situation), piped up in his "I'm a PIC" voice and said, "I wish you humans would just decide what it is you want me to do!" Note, if you're burning them that often, you'll be replacing parts before long. That stuff about them taking many zillions of burns isn't quite true... hundreds, no problem, but after thousands they start to have bit errors in the code -- very un-good.

I'd bet that they had you blinking lights using their provided "delay()" stuff, which BTW isn't a C standard thing, just a goodie they give you free (and that's one reason other stuff in the compiler has to know clock speeds etc). Pretty dumb, sometimes useful. But here we have all too much control over the computer, and using delay() is often stupid as it just sits in a loop counting iterations and no other code is or can be run during that (other than interrupts). That means anything else that needs attention doesn't get it while you're stuck in a "spin loop" delaying - a really dumb waste of computer power. One of the nice features of my opsys is that it provides other ways of doing that where you can "set" a delay (I call them oneshots) and merely check once in awhile to see if they've timed out, meanwhile checking on other things and getting use out of the CPU that way on those other things. Remember, you ARE the opsys here, and locking up a machine is real easy to do -- and rarely smart or intended. You'll probably even see examples of delay() used in interrupt service routines to debounce switches. AVOID -- there are smarter ways that return control to the main code quicker and make the overall thing much more responsive -- I will provide examples of that as they arise.
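
Just to sketch the difference now (the names below are invented for the example -- assume a timer interrupt elsewhere increments a global tickCount, and pollSerial/pollSwitches stand in for whatever other work you have):

// the blocking way -- the whole machine does nothing useful for a second at a time
void blinkTheDumbWay(void)
{
   while(TRUE)
   {
      output_toggle(PIN_B0);
      delay_ms(1000);              // nothing else runs here except interrupts
   }
}

// the non-blocking way -- a "oneshot" you merely check once in a while
void blinkAndStillDoWork(void)
{
   int16 nextBlink = tickCount + TICKS_PER_SECOND;   // "set" the oneshot

   while(TRUE)
   {
      if (tickCount >= nextBlink)                    // timed out yet? (wraparound handling left out for brevity)
      {
         output_toggle(PIN_B0);
         nextBlink = tickCount + TICKS_PER_SECOND;   // re-arm it for the next blink
      }
      pollSerial();                                  // everything else still gets regular attention meanwhile
      pollSwitches();
   }
}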

Re: Learning PIC microcontrollers and C programming - scope

Postby Doug Coulter » Tue Jun 21, 2011 10:58 am

One thing to know about any programming language is how it "scopes" things. That's a shorthand way of saying "over what space and time is this thing visible, or in existence".
Depending on the machine and compiler, getting this right can do good things for you.

The main scopes that exist (you have control over a lot of this) are project, file, and routine. For the moment, let's just look at scope as it applies to variables (it can also apply to any named thing).

At project scope, a declared variable exists always, and is visible and changeable by anything. We call those "globals". They are both useful and incredibly dangerous. Some people go to ridiculous lengths to avoid them. I don't, but this is a place for some caution -- if anything can change a variable, it might not have the value you expect it to have next time you check. That can be good (if you're using it for communication between things) or really bad, depending. The always-exists issue means the memory for that variable is always in use, so it can't be allocated to anything else -- always an issue, but especially on a platform where ram is tight. Also, you need to understand that some variable declarations generate code, which runs at various times depending on scope. For example, at project scope the statement:

float SomePreciseNumber = 3.1415927; // get a copy of pi -- everyone can see this, and can also write to it. Blessing or curse is up to you.

Creates some code that loads up this variable with the number -- but only at startup time. This obviously eats some rom to hold that code, and that number -- you might care, or not.
You can in some cases save yourself some of that via using something like

#define PI 3.1415927

And then everywhere you use PI in code, the compiler simply substitutes the number. Depending on a lot of things (including how often and where you use PI) this might either save code, ram and cycles, or not. Complex expressions in defines are all calculated in the compiler, once, at compile time and take no cycles or space in the target machine, and if you need to use them to make things more obvious, you should -- let the PC do the heavy lifting, once, to save resources in the target project.
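
For instance (numbers made up, and no prescaler assumed), a timer reload value can be spelled out in readable pieces and still cost nothing at runtime:

#define CLOCK_HZ       16000000
#define TICKS_PER_SEC  100
#define TIMER1_RELOAD  (65536 - (CLOCK_HZ / 4 / TICKS_PER_SEC))   // the whole expression folds down to one constant at compile time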

Now, if you were using pi all over? The #define construct wastes memory by putting a copy of it everyplace you use the define. Not good. For that, C provides another way:

const float PI = 3.1415927; // every use of PI now just shares this one memory location, const means the compiler will throw an error if any code is detected writing to this
// but note, the protection const gives isn't perfect -- if you have this near an array, and write the array out of bounds, the compiler can't know about that. This is the classic buffer overflow
// bug you see in bad windows code all the time in routines that don't check bounds.

For example, you might want to know array bounds for some reason -- there's more than one way to do it, as they say over in Perl-land. Here's one way:

#define ARRAYSIZE 32 // you might want to be able to change this once in awhile for everything that sees the array as you're building code -- we do it for buffers here all the time
#define ARRAYBOUND ((ARRAYSIZE * sizeof(int)) - 1) // the size of the array in bytes, minus one -- the last legal byte offset into it, since things start at index zero in C, not one

int Array[ARRAYSIZE]; // actually declare the array and the memory it's going to eat, and name it "Array"

Some other code can then use ARRAYBOUND to make sure a pointer into the array actually points to the inside of it, by checking against the size in bytes and the array's start address, no matter what data type the array is actually made of. This is handy if you're checking by looking at the actual number in a pointer to see where it points, which is usually going to be in bytes.
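
A sketch of that kind of check, using the defines above (this assumes ram pointers fit in 16 bits, as they do on these PIC18s):

int1 pointsInsideArray(int *candidate)                 // TRUE if candidate lands somewhere inside Array
{
   int16 offset = (int16)candidate - (int16)Array;     // distance from the start of the array, in bytes
   return (offset <= ARRAYBOUND);                      // legal byte offsets run from 0 to ARRAYBOUND
}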

If the #defines are at project scope - everything can use them. If they have internal calculations, those are all done once in the compiler, which can speed things up at runtime.

Going to the other extreme of scope - let's look at the inside of a subroutine. You might have something along the lines of:

int SomeRoutine(int firstarg, int *secondarg)
{
   int localvar = firstarg + *secondarg; // add the args
   return localvar;
}
This construct allocates a ram location for localvar every time the routine is run, and frees the memory every time the routine returns. This isn't a perfect example, but just shows that this little bit of memory can be used, and reused all over the program. But only inside this routine does the name localvar exist -- no one else can see it at all, and it's gone between runs of the routine, completely. Now, a seasoned C programmer wouldn't do this for the example case, he'd just write instead:

return firstarg + *secondarg; // saves typing and a memory location because the compiler also allocates one for the return value anyway.

The CCS compiler is insanely good at this, it builds a very complete "call tree" that tells it what routines can call what other ones from inside them -- and makes sure things don't step on each other, while saving the most memory possible -- this is a very useful feature in a PIC to say the least. You can, btw, get that call tree to be displayed, and it will help you understand your own code sometimes...nice.

Suppose you want some variable that's only visible to certain routine(s), but that stays around between calls? Again, there's more than one way. The classic way is to use some extra brackets to create a scope, like this:

{
   int counter = 0; // initialized once at startup only

   int countCounter(void)
   {
      return ++counter; // counts counter, returns new value to caller
   }

} // end of special scope that keeps counter statically available, but just to routines inside these brackets

This particular case shows up enough that C provides a syntactically simpler way that acts the same. You can just say:

int countCounter(void)
{
   static int counter = 0;
   return ++counter;
}

And yet another way -- just define the counter someplace global and it will always be there -- but also can be seen and changed by anything. C gives you a ton of fine-grain control over stuff like this and as usual - it's a mixed blessing. The relevant saying is "C gives you enough rope to shoot yourself in the foot" and truer words are rarely spoken.

To sum up (there's a lot more that could be said, but for now -- keeping it simple): scope determines what the compiler-generated code can see from where, and when any code associated with initialization gets run (either wasting cycles or using them wisely, depending).

Another case where scope comes up is where and when you define and declare stuff so that the compiler has, for instance, seen a subroutine before the code that calls it. While in theory the compiler could just iterate over the code endlessly till it found everything, most (including this one) don't do that, and there are "rules" in C to avoid the necessity of it. You have to at least declare something before you use it (it can be actually defined elsewhere). In my case, I tend to avoid having to do "forward declarations", as it's more typing and needlessly slows things down some (at compile time only, but it's my time too).

Thus, I'll just define all my global variables and subroutines up at the beginning of things before using them -- it's a way to put together code that's easy to read and understand later. This puts the crucial global variables at the very top of the file (so they're quick to find later) and the all important main code at the very end (so it's easy to find later) while all the subs live in the middle somewhere. I avoid writing pairs of routines that can call one another when I can (almost always) as this requires some kind of forward declaration.
Ok, enough English, now to show this in a real language.

// example of a forward declaration
int DoSomething(int, float); // just declares that someplace else you're going to define this subroutine for real

int someint;
float somefloat;

void somedumbroutine(void) // this somedumbroutine is defined fully right here
{
   DoSomething(someint, somefloat); // this would give a compile error if DoSomething wasn't declared first
}

// later on
int DoSomething(int firstarg, float secondarg)
{
   // do whatever DoSomething does here
   return 0; // it's declared to return an int, so give one back
}

A lot of people do things this way -- I really try to avoid it. For both me and the compiler, it's less work to just define things in a good order and not have to use this "workaround" much.
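
The "good order" layout I'm describing ends up looking roughly like this skeleton (names made up):

// --- globals first, so they're easy to find later ---
int16 sampleCount = 0;

// --- subroutines in the middle, each one defined before anything below it that calls it ---
void resetCounts(void)
{
   sampleCount = 0;
}

// --- main at the very end, where it can see everything above ---
void main(void)
{
   resetCounts();
   while(TRUE)
   {
      // foreground work goes here
   }
}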

One use of scope in more advanced situations is when you write reusable libraries -- you might not want to make every variable and routine in some library visible to its users, for various good reasons. The nice thing about libraries is that a library (if designed well) is a nice chunk of very re-usable code you can pull out of your bag of tricks for many projects. If well written, the user won't have to know much about what's inside, usually. Now, the CCS linker that would be real helpful in using libraries is brain-dead, there are some issues with all those fancy non-C chip-specific directives getting tangled up, and frankly, the PIC is small enough that library reuse is questionable if the library has anything in it not used in this project, so I don't use it! At any rate, most libraries as used are only compiled one last time once they're finished and debugged, and to use one you link in the compiled code and include a header file for that library that specifies just what the library wants to expose to the public -- which can be very good and save a lot of hassle with naming collisions; you'll run into those when you start doing anything big and complex.
Good names are real important to understanding later on, but doggone it, some of the good names aren't that unique -- clearArray, MoveData, Init() and so on might want to show up a lot, so scope can make it possible to use the same names without confusing the compiler. You just have to be careful you don't confuse yourself.

As usual, there's more than one way, and in my case I make "libraries" that are just #include files....and include them up at the top of the main code so all the declarations are there by the time any of that stuff is called. If I want to use scope to hide some things in that included source, the "extra brackets" trick works, among others. This is a case where what you'd do for a tiny embedded job is a heck of a lot different than what you'd do in a big PC program, where any real handy library is probably already available, dynamically loaded, and can be shared safely across a few running programs as the opsys is clever enough to keep any variables used in a dynamic library as separate copies for each program or process that uses the library -- in a PC it saves space. In a PIC, not so much...C is flexible enough (long rope!) to handle either case well -- but you have to provide all the brains, always. One size does NOT fit all here, embedded is its own specialty and depends more closely on what's available in the platform in use.
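
So a project here tends to look something like this (file and routine names invented, just to show the shape):

// main.c
#include <18F45K22.h>
#use delay(clock=16000000)

#include "serial_helpers.c"    // the "library" -- plain source pulled in up here, so all its names exist before anything below uses them

void main(void)
{
   initSerialHelpers();        // whatever that file chooses to expose by name
   while(TRUE)
   {
      // rest of the program
   }
}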

This is one reason I don't recommend eating up every book on C programming on the cutout shelf at the bookstore. Yeah, all that stuff is nice to know someday, but some of the more general C tricks that are perfectly appropriate in a PC with "unlimited memory and cycles" are just dumb in a PIC or other embedded device - In the latter, you're smarter to be a lot more "down on the metal" and not use too much "magic between you and the machine" in general. PICs were originally designed to be logic replacements -- just replace a few gates and flops. They've grown, but the basic idea is still the same, and KISS never applied more than in a small SOC system. You are generally just wanting to make each chunk of PIC hardware do just one thing, well...

Re: Learning PIC microcontrollers and C programming

Postby Doug Coulter » Tue Jun 21, 2011 1:57 pm

FWIW, I just got my CCS USB dev board -- I didn't get the kit, so no exercises, no cables -- but the sample software comes with the compiler support I renewed anyway. (and one factor in that renewal was discovering they'd updated this particular sample a lot since last time). Pretty little board with a nice USB jack on it, the ability to run from USB power (there goes the need for one of the cables) and some IO brought out to a dual row header socket, enough to tinker with or even do most if not all of the standard counter with. I'll start another thread to chronicle that effort, which will eventually result in a board of my own design (much cheaper) and the use of my opsys, which has really spoiled me over the stuff CCS provides. One thing I don't like much about these boards is the use of a 1/8" stereo jack for rs-232 -- saves them some money, but creates hassle as those are maybe the least reliable connectors ever to exist. I might just make a dongle with a real db-9 and solder it right to the board. When you are debugging code and building a system you want the very least uncertainty possible in everything other than what you're testing now, and something like a flakey cable can drive you insane.

Just one opsys example -- their printf() is a blocking call -- nothing else can happen till all the bytes go out....So my opsys contains a "real" Rs-232 driver that's non blocking and interrupt driven (as long as you don't swamp the buffers) for just one example, and it's not that much harder to do sprintf to a buffer and then say "send()" -- and get control back in a microsecond or two while the bytes go out. For reasonably well written code, you'll never wait at all. For example, sending a line of data once a second works nice as long as it's all gone out before the next second - if you know that (by design) you don't even have to check buffer status. It turns out the cooperative multitasking I use in my opsys is the same as that used for the USB code anyway, so no problems there. Ditto various other timing stuff that's "right" in my system, but not in theirs, and it's usually a conceptually minor task to convert (but sometimes not actually simple). The advantage of course is getting more for your money -- wasting fewer chip resources to get a job done means you can get more jobs done, use a cheaper chip, whatever. I guess I still care about that as a hangover from doing all the product design for Valcom, where anything we did got made in at least 100s of thousands, and some millions. Save a buck on each - you've got yourself one happy and loyal customer almost no matter what you charge, and they pay you on time.
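
The shape of that, with invented names (this isn't the literal API in my opsys, just the idea):

char txLine[48];                                       // one line's worth of output

void reportReadings(int32 counts, signed int8 tempC)   // invented routine, just to show the pattern
{
   sprintf(txLine, "counts=%lu temp=%d\r\n", counts, tempC);
   sendSerial(txLine);                                 // hypothetical interrupt-driven send: queues the bytes and returns right away
   // ...and we're back to real work while the UART empties the buffer in the background
}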

At the other extreme, you have the debacle of that Pfeiffer mass spectrometer I own. That one uses a board capable of running Windows CE internally, just so they can use C# and DCOM, because they didn't want to pay embedded programmers, and the Windows guys are, in general, incapable of designing a comm protocol -- or even of intelligently using an existing one, like say UDP or TCP/IP directly. Even at that, they don't use the WinCE computer to actually do anything but communicate -- all the "heavy lifting" to set the quadrupole signals and so forth is still done in the PC at the other end -- which, being in C#, is slow, full of blocking spin loops hidden in libraries the programmers don't know the insides of, not very responsive (it eats a whole Pentium at a GHz and brings it to its knees), and crashy. Not to mention, DCOM is one of the larger "security issues" in the entire programming world. Lame -- but that's a purely bad use of the available horsepower -- they don't care, as they don't sell that many and don't have a lot of cheap competition (yet -- we might be those guys someday, though). Enough rope....
