Idea : Ultra fast reading of linear CCD data

Buzby

Senior Member
Hi hippy,

Here is another of my half baked ideas. I'd like to try it for real, but I'm 200 miles away from my hardware. ( Rush packing, I forgot my toys )

Regarding reading the pixels in, your tests show 'for / next' takes 40ms, and unrolled '@ptr= : pulsout CLK' takes 20ms.
How would you like to see a way to drop that to 10ms, or maybe significantly less ?

I think you could use hspi.

spi sends a clock train out, and reads the data in synchronously, so it should be possible to read the digital input much faster. You might need to bit-bang the first CLK, as SI needs to be reset at that time, but the rest is just clocks.
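
Something like this is roughly what I have in mind. It is completely untested, the SI pin name is made up, and the SPI mode and speed would need checking against the CCD timing, so treat it as a sketch only :

Code:
' Untested sketch - assumes an X2 part with the CCD digital output wired to
' the hardware SDI pin and the hardware SCK pin driving the CCD CLK.
symbol SI = B.1                     ' start-integration pulse ( placeholder pin )

init:
    hspisetup spimode00, spimedium  ' mode and speed need checking for the CCD

main:
    ptr = 0
    pulsout SI, 1                   ' bit-bang SI ( and the first CLK if the CCD needs it )
    ' each hspiin byte generates 8 clocks and packs 8 pixels, 1 bit per pixel,
    ' so 16 bytes covers all 128 pixels
    ' ( if @ptrinc is not accepted here, ordinary byte variables would do )
    hspiin ( @ptrinc, @ptrinc, @ptrinc, @ptrinc, @ptrinc, @ptrinc, @ptrinc, @ptrinc )
    hspiin ( @ptrinc, @ptrinc, @ptrinc, @ptrinc, @ptrinc, @ptrinc, @ptrinc, @ptrinc )
    goto main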

I'm sure this would work fine if you are reading the data as digital, but what if, like me, you are reading the analogue values ?

Time for an even less baked idea !

The CCD doesn't mind having the CLK running all the time; it only springs into action when SI goes high.

So, start HPWM clocking the CCD. ( Do this in the init code so it stays running. )

The actual sensor reading code is :
pulsout SI
@ptr=readadc
@ptr=readadc
@ptr=readadc ... 128 times

The trick is to have HPWM clocking the CCD at the same rate that the string of @ptr=readadc's is filling the scratchpad. You will probably need to select which HPWM phase to use, and you might lose a pixel at the beginning, because of SI. ( There might be a way round this on an 'M', maybe something using that Digital Signal Router. )
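
As a very rough sketch ( completely untested, pin names invented, and written with pwmout rather than hpwm just to keep it short ) it would be something like :

Code:
' Untested sketch - CLK, SI and AO are placeholder pins, and the pwmout values
' are only a guess ( roughly 12.5kHz at 64MHz, if the pwmdiv16 prescaler is
' available ) - in practice the clock must be tuned so that one CCD clock
' period matches one readadc.
symbol CLK = C.1                    ' CCD clock ( must be a pwmout-capable pin )
symbol SI  = B.1                    ' start-integration pulse
symbol AO  = B.0                    ' CCD analogue output

init:
    pwmout pwmdiv16, CLK, 79, 160   ' free-running clock, left on all the time

main:
    ptr = 0
    pulsout SI, 1                   ' start a new readout
    readadc AO, @ptrinc             ' one reading per clock period...
    readadc AO, @ptrinc             ' ...repeated 128 times in a straight line,
    readadc AO, @ptrinc             ' paced so each one lands on a new pixel
    ' ( 125 more of these )
    goto main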

Now I know there may also be issues with the 'unpredictable execution time' of PICAXE instructions, but the odd wrong scan is a small price to pay if you can repeat the process 1000 times per second !.

I won't be able to test these ideas till Sunday, but they're interesting, so what do you think ?

Cheers,

Buzby
 

hippy

Technical Support
Staff member
Ten "ReadAdc B.0, @ptrInc" takes 1.569ms at 32MHz so about 80us a piece at 64MHz. That's a sampling frequency of 12.5kHz. The 128 readings would take just over 10ms at 64MHz. One can sample just 64 in 5ms by running the PWM twice as fast to get every other pixel.

An "@ptrInc = pinA.0" takes 40us at 64MHz, so one could read 128 digital pixels in 5ms, 64 in 2.5ms.

There might be some aliasing, slewing, jitter and repeated pixels, but being off by one pixel in 128 isn't so bad; it should have no more effect than natural anomalous readings would have.

All in; an excellent idea.
 

AllyCat

Senior Member
Hi,

start HPWM clocking the CCD. ( Do this in the init code so it stays running. )

The actual sensor reading code is :
pulsout SI
@ptr=readadc
@ptr=readadc
@ptr=readadc ... 128 times
Ah, like wot I said in post #5 last week. ;)

But no need for an HPWM pin, any PWM will do. And no need for 128 readadcs; about 16 will do (because, with no lens, the sensor optical resolution is very low), so maybe another 8 times faster! Conceptually, 16 readadcs accumulate 128 bits of data, as do 128 x 1-bit digital samples. But actually, the adcs can potentially acquire more (useful) data, because an "edge" recovered from 8 serial b/w pixels can have only one of 8 values, i.e. 3 bits of usable data.

As there is no lens, any particular sensor diode doesn't just "see" the corresponding object pixel (i.e. the line or background immediately below), but also some (many) to each side (depending on the vertical distance between the sensor and the object field). Suppose the adjacent pixel (to the one immediately below a sensor diode) appears offset by 5 degrees, then it "illuminates" the diode by 99.5% of that from the position immediately below (i.e. Cosine 5 degrees = 0.995). Even the pixels offset by 45 degrees (i.e. horizontal offset = sensor distance) contribute around 70% of that from the "target" pixel. Therefore, if the edge of the line is exactly beneath a specific sensor diode, the analogue value will be half-way between the black and white levels (i.e. the point at which we hope the digital data slices the analogue signal).

So we only need to sample the analogue data sufficiently often to ensure that one or two samples have the line immediately below. First, the algorithm would inspect the (typically) 16 analogue values to determine the (average) white and black levels and find the approximate location of the line. Then it looks at the analogue values each side, where they change between black and white. If the value is exactly half-way, then it knows that that is the real position of the edge of the line. More generally, the amount of "blackness" or "whiteness" of the (unfocussed) "edge" can determine how far away the actual edge of the line is (theoretically up to 1 part in 256 of the sampled pixel spacing, so getting better than 1 part in 8 should be "easy").
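
In PICAXE terms the interpolation might look something like the following. This is only a sketch to show the principle (not tested, variable names invented, written as a subroutine), and it assumes the 16 analogue samples are already in scratchpad locations 0 to 15, with a dark line on a light background:

Code:
' Sketch only - finds the white-to-black transition among 16 analogue samples
' ( scratchpad 0..15 ) and interpolates the edge to 1/16th of a sample spacing.
symbol white    = b10               ' brightest sample found
symbol black    = b11               ' darkest sample found
symbol midlevel = b12               ' half-way threshold
symbol sampA    = b13               ' sample before the candidate edge
symbol sampB    = b14               ' sample after the candidate edge
symbol span     = b15
symbol index    = b16
symbol edgepos  = w10               ' edge position in 1/16ths of a sample spacing

find_edge:
    white = 0 : black = 255
    for index = 0 to 15             ' first pass: establish the white and black levels
        ptr = index
        sampA = @ptr
        if sampA > white then
            white = sampA
        endif
        if sampA < black then
            black = sampA
        endif
    next index
    midlevel = white + black / 2    ' PICAXE maths is strictly left to right

    for index = 0 to 14             ' second pass: find where the signal crosses half-way
        ptr = index
        sampA = @ptrinc
        sampB = @ptr
        if sampA >= midlevel and sampB < midlevel then
            span = sampA - sampB                    ' no brackets in PICAXE maths, so split it up
            edgepos = sampA - midlevel * 16 / span  ' = ( sampA - midlevel ) * 16 / ( sampA - sampB )
            edgepos = index * 16 + edgepos          ' whole samples plus the interpolated fraction
            goto edge_found
        endif
    next index
edge_found:
    return
Scaling the "16" up is all that would be needed for finer resolution, towards that theoretical 1 part in 256.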

Perhaps even this might be considered overkill. In practice one might just read the ADC values from the first and last sensor pixels and use the relative brightness to determine whether to move left or right (like erco does).

Cheers, Alan.
 

Buzby

Senior Member
Hi Alan,

Sorry, I missed your post last week. At least I know now I'm not the only one with 'off the wall' ideas !.

The argument for low res v hi res ( i.e. no lens or with lens ) is quite valid in edmunds' case. The PICAXE ( and mechanics ) is not powerful enough to make sense of a displacement of 70 um, and without a lens you wouldn't get that resolution anyway.

The downside of using lots of maths to 'improve' the resolution is that it takes a lot of time. If you used a dedicated PICAXE for the job it would make more sense, but that's not an option in edmunds' case.

I was more interested in how to drive the CCD as efficiently as possible, as it might have a future use. What that use might be I don't know !.
( I have only used a professional line scan camera in an industrial application once, but that ( and a bit of lateral thinking ) enabled me to complete a major project for a client at 20% of the price that a huge multi-national company were quoting to do the job. I was well pleased with that, as was the customer. )

As it is I can't think of a use for a line scan camera. The 'line follower' I think could be done better with a 2D array of 'normal' sensors, but edmunds has done a good job in such a small space.

Cheers,

Buzby
 

edmunds

Senior Member
The two ideas you are proposing are the ones I started with. I used shiftin for the previous sensor, so doing the same with this one was the number one idea. Shiftin turned out to be too slow to avoid saturating the sensor. HSPIIN I could not use, because I could not get the SPI modes right - the settings I needed were not available with firmware version B.3, which I have on all my devices. I did not think of pokeSFR for some reason at the time. I might revisit this now, after reading in the Microchip datasheet that there are actually two SPI-capable peripherals available.

I have also tested PWM, but I never got past 'might be seeing some pattern'. I decided back then it was because the PWM settings were too crude with the PICAXE firmware: for example, 127795Hz ends up with the same settings as 130000Hz. Again, maybe there is more resolution available in the PIC registers, which I have not tried. Something to look into.


Thank you for your time,

Edmunds
 

edmunds

Senior Member
I have more to report. I got PWM to work to some extent. At least more than I ever had before.

The following code works reasonably well for about 1000 samples. For some reason it then becomes a lot worse, as in not usable, and then drifts back to usable after some time. This would suggest that there is some lag (much less than a us per clock, most likely) that mounts up and finally shifts the read too close to the clock edge (the sensor wants the read at approximately half-phase) and thus distorts the picture.

However, this could only be true if I were not restarting the PWM every time. The reason for restarting it is that the sensor actually acquires data at the full 8MHz to obtain the best possible range; with only the 100kHz of the actual read, I would saturate it at even very low light levels. With the code I have, I am left thinking that the PWM frequency, the execution time of my shift-or sequences, or something in the sensor is not accurate enough. Any other ideas?

Code:
'Acquire digital signal string from photodiodes
    line_state = 0
    pwmout CLK, 159, 319    '100kHz
    high C.7
    high SI : pauseus read_lag : low SI
    ConstructLine
    low C.7
    pwmout CLK, OFF
Code:
#macro ConstructLine()
    line_state = line_state << 1 | pinA.0 : pauseus read_lag
    line_state = line_state << 1 | pinA.0 : pauseus read_lag
    line_state = line_state << 1 | pinA.0 : pauseus read_lag
    line_state = line_state << 1 | pinA.0 : pauseus read_lag
    line_state = line_state << 1 | pinA.0 : pauseus read_lag
    line_state = line_state << 1 | pinA.0 : pauseus read_lag
    line_state = line_state << 1 | pinA.0 : pauseus read_lag
    line_state = line_state << 1 | pinA.0 : pauseus read_lag
    line_state = line_state << 1 | pinA.0 : pauseus read_lag
    line_state = line_state << 1 | pinA.0 : pauseus read_lag
    line_state = line_state << 1 | pinA.0 : pauseus read_lag
    line_state = line_state << 1 | pinA.0 : pauseus read_lag
    line_state = line_state << 1 | pinA.0 : pauseus read_lag
    line_state = line_state << 1 | pinA.0 : pauseus read_lag
    line_state = line_state << 1 | pinA.0 : pauseus read_lag
#endmacro
read_lag is 10. Possibly, I could increase the PWM frequency or set it more exactly with pokeSFR, but for debugging it is easier to change this value.

Edmunds
 

edmunds

Senior Member
How accurate can I expect a 16MHz resonator hanging on 50mm of wires to be? I mean, as opposed to one placed as close to the PICAXE IC as possible and routed with the thinnest possible curved tracks on a PCB. Maybe that is what is drifting?

Edmunds
 

AllyCat

Senior Member
Hi,

The argument for low res v hi res ( i.e. no lens or with lens ) is quite valid in edmunds' case. The PICAXE ( and mechanics ) is not powerful enough to make sense of a displacement of 70 um, and without a lens you wouldn't get that resolution anyway.
Yes I agree, the sensor and hardware implementation seem quite appropriate for edmunds' application (but I believe there might be a better solution*). However, I think that there may have been some misunderstandings (or misassumptions) at the "system" level:

When I was first introduced to serious "image processing" many years ago, I was surprised at the emphasis that the experts placed on "temporal resolution", in addition to "spatial resolution". The nominal figures for edmunds' project at the moment are a spatial resolution of 60 micrometres (the native sensor resolution) and 50 milliseconds (20 Hz = the target loop/scanning time for the software). That might seem like comparing chalk with cheese (more extreme than apples with oranges ?), but not when motion is involved: We know the (target) speed is 30 km/h, which scales to 100 mm / second. Thus the temporal resolution (moving along the line) is 100 / 20 = 5 mm.

A lateral resolution of 60 um and forward resolution of 5 mm might be appropriate for following a straight line at high speed, but when a bend is encountered the lateral error may suddenly jump to several mm. Furthermore, if one is attempting "navigation" along a complex pattern of 2 mm wide lines, then the 5mm temporal resolution may totally miss a crossing or T junction. So IMHO more attention should be paid to increasing overall program loop speed (even if at the expense of lateral resolution).

Another way of looking at it is that the native spatial resolution of the sensor is (arguably) 60 um, but the temporal resolution is up to 8 MHz / 130 pixels = 60 kHz, or 16 microseconds. So the present application is attempting to use the sensor at its "full" spatial resolution, but at less than 1 / 3000 th of its temporal resolution capability! I suspect that a better optimisation is possible. :rolleyes:

* I have an idea of how to put the (SMD) sensor into a (pin-hole) "camera" with a volume around 0.5 cc. The advantage of a pin-hole camera is that you can choose (optimise) the resolution for the application, it has an infinite depth of field (no focus adjustment) and has independent object and image distances, so the "magnification" can be optimised (in this case to a value less than 1). Therefore, it could scan the whole distance between the wheels in this application, and be scaled up to any larger vehicle. For "competition" line following (when located in front of the wheels), it could even scan wider than the vehicle track, or maybe even look more "forwards". It would also reduce the problem of saturating the sensor.

So that's why I actually purchased one of the sensors, but I'm in no hurry to risk destroying it with my (now) rather dodgy soldering skills, so don't expect any results imminently. ;)

Cheers, Alan.
 

stan74

Senior Member
Soz 4 chipping in, sounds interesting. There's free USB/video camera software for RPi motion detection. There must be some low-res stuff for robots somewhere. It might make US/IR sensors obsolete. Range finding is focus, i.e. highest contrast on edges. You get colour recognition too.
 

AllyCat

Senior Member
Hi,

Stan: Do you have a link?

How accurate can I expect a 16MHz resonator hanging on 50mm of wires to be? I mean, as opposed to one placed as close to the PICAXE IC as possible and routed with the thinnest possible curved tracks on a PCB. Maybe that is what is drifting?
By resonator, do you mean ceramic (often looks like a capacitor and/or with 3 pins) or a crystal (usually in a metal can, with two pins)?

Crystals are at least an order of magnitude more accurate, but I wouldn't expect the drift of either type to be seriously affected by 50 mm lead length (but it may depend if/where any capacitors are connected).

However, the SPI/PWM and PICaxe program code all run from the same oscillator so I wouldn't expect drift to be a particular issue. But I would restart the PWM immediately after (or before) sending the SI pulse each time, to ensure reasonable repeatability.
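
For example, something along these lines ( untested, keeping your pin names, macro and pwmout values, and just one possible ordering ):

Code:
' Untested - same as your code, but the PWM is (re)started immediately after SI
' goes high each scan, so the clock phase relative to SI should be much the
' same every time.
    line_state = 0
    high C.7
    high SI
    pwmout CLK, 159, 319            ' 100kHz, restarted right at the SI pulse
    pauseus read_lag : low SI
    ConstructLine
    low C.7
    pwmout CLK, OFF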

Cheers, Alan.
 

edmunds

Senior Member
Alan: it is ceramic, caps integrated, part number and data sheet on a different machine, but Murata, tiny.

However, I have reason to believe it might be my own code that is luring me into blaming higher powers like on-chip oscillators and external resonators :). I still have the silly calibration procedure that is based not on how much line I see, but on the average voltage of the sensor output. It does not adjust fast enough, it does not adjust in the best way, and it sometimes does something totally strange when I run to it for help when not seeing anything at all. While chasing my own tail with this and sipping a hot drink, I realized there is at least one important thing to take away from today, and it is worth posting here.

When I do see the line with the fast PWM + read-every-16th-pixel-digitally method (~1.6ms to read), the quality of the read is only a tad worse than my digital, EEPROM looked-up method (~31ms), which is perfect every time. By a tad worse, I mean that it seems the result of the read can be used without any further processing. An odd read will be 2 or 3 bits off, but it might not matter enough to have a noticeable effect on the vehicle.

Would be nice to try it out, but exposure time (=calibration) has to be sorted first.

Edmunds
 