by Doug Coulter » Sun Jan 02, 2011 8:34 pm
Thanks Chris, that's actually quite encouraging, on analysis.
Don't know where they found that pure Re (someone else's money, no doubt), but heck, those numbers aren't as good as what I'm getting here now at 50 kV with a much simpler and smaller setup (with pure W) -- and with far less input power, even in the less-good static mode. Hah! A difference that small could be grid-to-grid precision variation alone; I see huge output differences here versus grid-build accuracy. Going from 80 mils of error to 30 in the build raised the output on the close order of 10x for me where the geometry is good -- in other words, away from the cylinder ends. The next one will be inside 10 mils of error. Still horrible by optics standards, but better than before.
We're going to kick some butt and take some names, guys -- or we already do, if my numbers are correct. I think they might be close enough to almost start bragging...
I doubt the grid material matters that much, as long as it survives the conditions -- I'm more worried about finding lower secondary electron emission at this point. With one exception, that's what I see here: material isn't too important. I never tried that hard with the NiCr (changed too many things at a time to declare that observation universal -- that was the spiral grid above, which failed for other reasons; crummy geometry for focus, for one).
Based on what I see and their numbers, their Q is lower than mine, Richard's, JonR's, and Tyler's by quite a lot -- a factor of a few at least. I see that same kind of poisser on what I call "failures" here, not that you can assume all that much from that alone; it's only semi-related as a diagnostic. It could be that making the thing too big is bad, dunno -- working on testing that here, actually; I've already tested "too small" fairly well (but not to completion). What they are seeing is what I see when I try for too much current and hit space-charge defocusing here -- neutrons stop going up fast with current and the curve rounds over (same as theirs). A lot of things have to be right together to get good "luminosity" at focus, and they've not hit a sweet spot as I have, is my take on that data.
The "error surface" in howevermany dimensions has a lot of local minima and maxima, so you can't just sweep in one parameter (or a couple) and find the global best spot (same problem as training a neural network), there are all these intermediate peaks in the function where it gets worse in all directions from there or doesn't change, and any sweep looking for "better" or a gradient will end at one of the local peaks -- but the real best might be a good ways off, with a big dip in between. This isn't a simple linear situation at all. For example, I hit the limit at 9.8 ma on mine (tried up to 40 ma) because it just defocuses -- more stuff going through the grid, but much less per beam area at the intersection as the beam(s) spreads out. In fact, the equations tend to indicate it should degrade somewhat before that, so I'd guess there's a little fudge from the electrons, or some other cause for it to even be as good as it is.
Consider the case of an optical telescope, far from focus. All you see is a blur, uniformly bright, no matter what it's pointed at. Changing focus slightly doesn't change anything you can observe if you're way off. In fact, until you get close, you can't even tell if you're moving in the right direction. Only quite near focus (with lenses that can even achieve that when properly adjusted) can you tell, and get it right.
Now instead, start way off with a lens that acts like a funhouse mirror (imagine the image from an optical glass lens ground into triangular facets, or some other polygons) -- you can fiddle endlessly and never get a decent image. With luck you can get some concentration, but you'll never focus to a point.
Now, on top of that, you can't see the actual image -- just the artifacts around where there was scatter (the poisser). That's the challenge we all face.
It might be possible to make what amounts to a Fresnel lens out of oddball shapes -- man figured that one out eventually, a long time after we learned to make normal lenses. I suspect history will at least rhyme here.
In our case, the effective curvature of the effective lens is a function of the field, which is a function of the spacing of the grid wires. So if you want a regular lens, you make those circles. If you're willing to accept a cylinder lens (in any case, remember it's a bunch of lenses, and a bunch of colliding beams), you make the elements straight rods, producing a cylinder lens between each pair. Either way, the object is to push the most possible particles per second through a finite, small place so they can hit -- and the probability of hitting goes way up as that space shrinks (much more than linearly), to the point where, if the space were one nucleus wave-function wide, it becomes 100%, more or less.
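Here's the crude density argument behind "much more than linearly" -- my own back-of-envelope, assuming you hold the number of ions in the focal region fixed while shrinking the ball they occupy. The two-body collision rate goes as n^2 * V with n = N/V, so it scales as N^2 / r^3:

```python
# Scaling sketch (assumption: a fixed count N of ions inside a ball of
# radius r). Collision rate ~ n^2 * sigma * v * V with n = N / V and
# V ~ r^3, so rate ~ N^2 * sigma * v / r^3: halve the radius, 8x the rate.
def relative_rate(r, r_ref=1.0):
    """Collision-rate gain from shrinking the focal radius r_ref -> r."""
    return (r_ref / r) ** 3

for r in (1.0, 0.5, 0.1):
    print(f"focal radius x{r:4.2f} -> collision rate x{relative_rate(r):,.0f}")
```

Shrink the radius 10x and the rate climbs 1000x at the same ion count -- which is why build accuracy pays off the way it does.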
You can't get more by just running more pressure, as that means collisions that defocus things before the ions get there. More current at a given ion velocity just means Coulomb defocusing. They are trying to overcome basic physics with brute force, and that's unlikely to be the answer unless they get to a heck of a lot more brute force than is likely in that setup (think the lasers at NIF, with luck).
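To put a rough number on that Coulomb limit (my own figuring with a standard beam-optics yardstick, not anything from their data): the dimensionless "generalized perveance" K = q*I / (2*pi*eps0*m*v^3) measures how badly space charge blows a nonrelativistic beam apart. There's no sharp line, but K creeping up toward 1e-3 generally means space charge, not lens quality, is setting the spot size:

```python
# Back-of-envelope space-charge check for a D+ beam (my numbers: 50 kV,
# currents bracketing where I see the rollover). Generalized perveance
# K = q * I / (2 * pi * eps0 * m * v^3) for a nonrelativistic beam.
import math

q    = 1.602e-19   # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
m_D  = 3.344e-27   # deuteron mass, kg

def perveance(I_amps, E_eV):
    v = math.sqrt(2.0 * E_eV * q / m_D)   # ion speed from the voltage drop
    return q * I_amps / (2.0 * math.pi * eps0 * m_D * v ** 3)

for I_mA in (1.0, 9.8, 40.0):
    K = perveance(I_mA * 1e-3, 50e3)
    print(f"{I_mA:5.1f} mA of D+ at 50 kV -> K ~ {K:.1e}")
```

At 1 mA you're near K ~ 1e-4; by 10 mA you're most of a decade past that, which at least rhymes with the rollover I see around 9.8 mA here.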
Well, enough for now, but that's my own philosophy (post a couple beers). If their numbers and mine are even close, it looks pretty good for my approach so far.
Posting as just me, not as the forum owner. Everything I say is "in my opinion" and YMMV -- which should go for everyone without saying.