
Putting a LAMP stack on a modern Linux

Postby Doug Coulter » Fri Sep 12, 2014 3:50 pm

I've decided to use a real database and so forth for data aq. It's harder upfront - you have to have a database engine and so on. And another problem has recently cropped up. Due to the insanity of opsys vendors (well, most of them - Ubuntu and Microsoft in particular) in making everything look like a tablet, a lot of the old GUI tools no longer work, even for programs intended to stay on one platform (but keep working over time), much less cross-platform ones.

It used to be easy to write native apps for windows (maybe it still is; I quit around the time of .NET) - devstudio had a drag-drop WYSIWYG GUI editor and simple ways to hook events (clicks and key shortcuts and so forth) to code. It was sweet. It's never been as easy on Linux, though Glade went a long way in that direction, particularly if you were writing C++, which is what I used to do. wxWidgets might work, but it's yet another learning curve, and hasn't been updated in a few years...not a good sign. It's possible, but sort of a nightmare, to write C++ code that will run on both linux and windows even with a recompile, as the opsys headers are so different and have different functionality at the granular level. It's not that you cannot - it's just a heck of a lot of extra work. All opsys have to be able to do more or less the same stuff, after all...they just approach it differently. I would rather leverage the labor other projects have already put into being cross-platform for this (database, gui, gnuplot, html browsers, you name it).

But I've since graduated to perl, which is a lot more cross-platform, and the ideal duct tape for some kinds of projects; you don't have to write write-only code in it, and I don't, since I may have to come back later and understand it myself. It makes talking to a database a one-liner (example just below), and it's the perfect glue for things that don't have to be super fast and efficient (even though it seems to beat the other interpreted languages hands down). Here's the problem - new versions of linux no longer support the version of Glade that creates XML output compatible with perl, so I've been developing with a 4 year old linux in VirtualBox. The stuff still runs on the new desktops/compositors, so far, mostly - but not always; for example, the slick trick I used to make a file browser come up already in the correct directory broke on the last rev. There is way too much backing and forthing to make sure something works, and to fix it if it doesn't, though.
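To show what I mean by a one-liner, here's a minimal example using DBI. The database name, table, user and password are made up for illustration - the real schema shows up later in this thread:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # connect to a local MySQL database (names and credentials are placeholders)
    my $dbh = DBI->connect("DBI:mysql:database=fusor;host=localhost",
                           "daquser", "secret", { RaiseError => 1 });

    # the one-liner part: shove a reading into a (hypothetical) counts table
    $dbh->do("INSERT INTO counts (run, stamp, counts) VALUES (?, ?, ?)",
             undef, 1, time(), 42);

    $dbh->disconnect;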

So, I'm considering doing this as a browser (HTML, double yuck) interface: perl (which is the original P in LAMP - the other ones, despite wikipedia partisans, didn't exist when the term was coined), MySQL (yes, I hate Oracle too, but it works, and there are a few others that work too with little change), Apache (yes, more complex and weird than NGINX - maybe, maybe not), and of course linux. I'm using Mint 17 MATE, as I kind of like the old ways - discoverability via menus, shortcuts on customized menu bars, task bars, and so on. Sorry, it's hard for me to remember to search for mv if I want to rename something. Of course, nowadays you do that in a graphical file explorer with a right-click...I can't seem to find a right-click on my tablet, though, which kind of makes this latest insanity of opsys GUIs moot, since removing right-click breaks 99% of other apps. Wonder when they'll realize that? There is no write once, run everywhere. Ok, enough rant.

When I went to put in a LAMP stack, I found no end of "help" on the web. The trouble is, much of it is for older versions of everything, and simply does not work now on the recent ones, which I tend to want to use for the obvious reasons - bug fixes, security, and possibly lasting a little longer in the compatibility game.
Tasksel looked promising, but didn't work. FYI.

I found just one useful link. Here it is:
https://www.digitalocean.com/community/ ... untu-14-04
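For reference, the meat of that guide boils down to a handful of apt commands. Package names here are the php5-era ones from Ubuntu 14.04 / Mint 17; later releases renamed them, so take this as a sketch rather than gospel:

    sudo apt-get update
    sudo apt-get install apache2                     # the A
    sudo apt-get install mysql-server php5-mysql     # the M, plus the php hook into it
    sudo mysql_install_db                            # seed the system tables
    sudo mysql_secure_installation                   # set root password, drop the test db
    sudo apt-get install php5 libapache2-mod-php5    # the P (php, in this case)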

Even this one had two flaws. mcrypt.so has to get in there some way for php to work right, and the guide doesn't do it. But you can find that and install it (the linux equivalent of a .dll is a .so). Also, you get annoying warnings on startup if you don't add a ServerName line to Apache's configuration, which I did, at least for "localhost". I did this by making a file called fqdn.conf (random name suggested by the original author, who I can't now find to link to) and putting this one line in it:

ServerName localhost

I then used the suggested a2enconf tool to enable it (it links the file from /etc/apache2/conf-available into /etc/apache2/conf-enabled) and that got rid of the warning.
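Concretely, the fix amounted to something like this (fqdn.conf is the arbitrary name mentioned above; the mcrypt lines are the usual Ubuntu 14.04-era package and php5enmod dance):

    # quiet the "could not reliably determine the server's fully qualified domain name" warning
    echo "ServerName localhost" | sudo tee /etc/apache2/conf-available/fqdn.conf
    sudo a2enconf fqdn
    sudo service apache2 reload

    # get mcrypt.so loaded for php
    sudo apt-get install php5-mcrypt
    sudo php5enmod mcrypt
    sudo service apache2 restart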

Due to having put tellem and tellme (elsewhere here under the "DNS for my LAN" thread) on all my machines, I only need to type this machine's name into any browser on my network and I get the apache pages from it. Cool.

For whatever reason, the basic index.whatever files and most of any website go in /var/www/html these days, yet any CGIs go by default into /usr/lib/cgi-bin/, even though they have all the same privileges and so on (security by obscurity? You have to be root anyway!). Of course, you can change the byzantine configuration files for all that. It's extra work, so I probably won't. I added mod_perl2, but will probably never need it, as there's only the one user here and startup speed is not important - and these are fast machines, so it's plenty fast without it anyway.
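A trivial test CGI proves the wiring before anything real gets written. Drop something like this into /usr/lib/cgi-bin/ (hello.pl, or whatever you want to call it), chmod +x it, and hit http://yourmachine/cgi-bin/hello.pl from another box:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI;

    my $q = CGI->new;
    print $q->header('text/html');                     # content-type header has to come first
    print $q->start_html(-title => 'LAMP sanity check'),
          $q->h1('Apache + perl CGI are alive'),
          $q->end_html;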

So this is the first thread/post in what will be quite a long exposition on how we are taking data, how we're storing it, the database schema to make all this 3rd normal form (believe me, it matters and is hard to change later), how we will do the CGIs to detect and control what data aq hardware we have (right now, two digitizers and counters with different speeds/feeds, a scope, audio recording, and possibly video recording), how we keep it all in absolute time-sync so we can tease cause and effect apart, and all the rest - data analysis, display, multidimensional plotting, retrieving and using the right calibration factors and units, and other junk that's work now but will make it sweet later on. Plots will no longer have to have custom-edited axis labels - it'll be in the database for whatever that source of data was. Calibration factors too, as well as what's hooked to what input on what. A new row in a schema table for a given data aq device handles what happens if you change what's connected, and tools like phpmyadmin make adding one trivial. Oh, and there are a few oopses in getting phpmyadmin to work; I'll cover them later.
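To make the schema idea a bit more concrete, here's the flavor of what I mean - a rough sketch only, with made-up table and column names, not the final layout (that will get its own posts):

    #!/usr/bin/perl
    # rough sketch of the table layout - illustrative names, not the real schema
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect("DBI:mysql:database=fusor;host=localhost",
                           "daquser", "secret", { RaiseError => 1 });

    # one master "runs" table hands out run numbers for everything else
    $dbh->do(q{
        CREATE TABLE IF NOT EXISTS runs (
            run     INT AUTO_INCREMENT PRIMARY KEY,
            started DATETIME NOT NULL,
            notes   TEXT
        )
    });

    # per-device schema table: what's hooked to which input, calibration, units
    $dbh->do(q{
        CREATE TABLE IF NOT EXISTS counter_schema (
            schema_id INT AUTO_INCREMENT PRIMARY KEY,
            input     VARCHAR(32),   -- which input this channel is wired to
            cal       DOUBLE,        -- multiplier to get to real units
            units     VARCHAR(16)
        )
    });

    # per-device data table: every row tagged with the run number and schema used
    $dbh->do(q{
        CREATE TABLE IF NOT EXISTS counter_data (
            run       INT NOT NULL,
            schema_id INT NOT NULL,
            stamp     DOUBLE NOT NULL,  -- seconds since the epoch, fractional
            counts    INT
        )
    });

    $dbh->disconnect;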

The only sad part about this is that I'm doing it instead of pure science, and that most other scientists don't have a handy sysadmin to do this kind of install for them, and refuse to, or can't, do it themselves - so even though this will be fully open source...it might not get a ton of traction (even scientists can be luddites, and I'm not an exception all the time myself). Too bad, because it's going to be really sweet. I'm re-writing (mostly just re-organizing) all my standalone perl that did data aq to log files as perl modules, so we can add features like auto-detecting what we have available, use it in a more coordinated fashion, and get better time-sync.

Obviously, though Linux can get time from NTP to around 10 us accuracy (and better resolution, but that's not enough anyway), if two things come in on IO ports at the same time, one has to get read in and put in the database first, so the database timestamps alone (or ordinals) won't quite do.
Most of these things have at least millisecond-accurate clocks - if you reset them all at the same time (again, not really possible) and don't let them run too long between resets (it all drifts). It's going to be an interesting journey. At least with the scope - 2.5 GHz sample rate on 4 channels - some of that can be solved via a screen grab of 4 things at a time, preserving pretty precise time sync, and certainly cause/effect. And I can read in all the screen grabs after a run, with timestamps already on them...perhaps that will do. I'm not sure yet whether I'll play the same game with my good video camera, or just use a webcam and have it local and right now, vs reading in the video from the good vidcam's storage (also with timestamps). Perhaps I should make both possible.
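On the timestamp side, the rough plan is to stamp each reading the moment it's read with something finer than whole seconds; Time::HiRes is the obvious tool. A sketch (table and column names are the same made-up ones from the schema sketch above):

    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday);

    # microsecond-resolution wall-clock time as one floating point number;
    # resolution, not accuracy - it's only as good as NTP keeps the system clock
    my ($sec, $usec) = gettimeofday();
    my $stamp = $sec + $usec / 1e6;

    # ...then $stamp goes into the row along with the reading, e.g.
    # $dbh->do("INSERT INTO counter_data (run, schema_id, stamp, counts) VALUES (?,?,?,?)",
    #          undef, $run, $schema_id, $stamp, $counts);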

Comments? Anyone else want this? Features I didn't think of? Now is the time - once it's really rolling, it's harder to change direction.
Posting as just me, not as the forum owner. Everything I say is "in my opinion" and YMMV -- which should go for everyone without saying.

Re: Putting a LAMP stack on a modern Linux

Postby Doug Coulter » Fri Sep 12, 2014 4:58 pm

Oh, I forgot. Here's what it looks like now on the homepage. I'll be adding links to the real CGIs that will do the data aq work. Maybe some for analysis, but my rotating gnuplot in 4d probably isn't going to go browser-enabled; it'll have to stay native, just adjusted to pull from the database instead of log files.
I will probably add some features to that anyway. What happens now for a Q plot is that if we get a couple of neutron counts while the power supply is off, it looks like infinite Q and seriously spoils the autoscaling. I've been going through log files and hand-editing those points out for now, which is a major job. I think with SQL we can just make a query to toss the obvious outliers like that.
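Something like this is what I have in mind - a sketch only, with invented table and column names, assuming the logged data carries the supply voltage alongside the neutron counts:

    # keep only the points where the supply was actually on, so a stray neutron
    # count with zero drive power can't show up as "infinite Q"
    my $rows = $dbh->selectall_arrayref(q{
        SELECT stamp, neutron_counts, supply_volts, supply_amps
        FROM   q_data
        WHERE  run = ? AND supply_volts > ?
    }, { Slice => {} }, $run, 1000);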
[Attachment: Screenshot-Fusor.png - "Success of sorts"]

Re: Putting a LAMP stack on a modern Linux

Postby Doug Coulter » Mon Sep 15, 2014 9:05 pm

[Attachment: Screenshot-Data Aq.png - "Kind of what I'm thinking as the control for data aq"]


Of course, I've not even finished the HTML here, but this is along the lines of what I was thinking above.
In theory, this will be a CGI that auto-checks the boxes for whatever hardware is present, and maybe remembers what schema etc. you last used as well.
It will kick off another CGI (as yet undesigned) that will actually run all that stuff and shove it into the database. Looks like all the realtime plotting etc. will still be native, not through a browser, as that's pretty hard to do without eating a ton of extra compute - and not needed, since playback will be possible, in non-linear time, just like an audio/video editor.
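Here's roughly the flavor of that auto-check idea - heavily hedged: the device list, the device nodes, and the run_daq.pl target are all placeholders, and the real detection will have to be smarter than just looking for a /dev node:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI;

    # hypothetical map of data aq hardware to the device node it shows up as
    my %devices = (
        'Counter A' => '/dev/ttyUSB0',
        'Digitizer' => '/dev/usbtmc0',
        'Scope'     => '/dev/usbtmc1',
    );

    my $q = CGI->new;
    print $q->header('text/html'),
          $q->start_html(-title => 'Data aq setup'),
          $q->start_form(-action => '/cgi-bin/run_daq.pl');   # the "as yet undesigned" CGI

    for my $name (sort keys %devices) {
        my $present = -e $devices{$name};       # pre-check the box if the node exists
        print $q->checkbox(-name    => $name,
                           -checked => $present,
                           -label   => $present ? "$name (detected)" : "$name (not found)"),
              $q->br;
    }

    print $q->submit(-value => 'Start run'),
          $q->end_form,
          $q->end_html;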

Other pages off the main page will let you view and edit the schema for the various devices, which will hold info on what is hooked up to what, the multiplier to get to the specified units, rates, and things like that. There will be a schema table for each hardware device, and the data table for that device will carry a run number. A separate "runs" table will create and hold the run numbers (for all the other data tables) and specify which schema was used with what.

