The picaxe can see!

Brietech

Senior Member
I used a 28X-1 running at 16 MHz, wired to a sensor taken out of a Nintendo Gameboy Camera. It gives 128x128 resolution. I sent the data back to my computer via serial, and then converted it into an image with MATLAB.

I need to play around with the exposure time, gain, etc. to see if I can make it look a little better. Each image is about 16 kilobytes. I want to experiment with writing it directly to an FRAM buffer first (and caching it in the scratchpad, as well), as that would be MUCH faster. Lots of room left for improvement, but it's a cool first step.

Oh, and the camera cost me about $5 on eBay.
 

Fowkc

Senior Member
Ah, thought I recognised MATLAB axes...

That's pretty damned cool. Potential for robot obstacle avoidance there I think.
 

hippy

Technical Support
Staff member
That is superb, and well done. I am sure there were cries of "OH YES!" and spiralling of arms in the air when you got it working, and you deserve that moment of celebration. You've definitely scored a first here and I'm sure a lot of people will be looking forward to seeing a write-up when you get the time.

It will be interesting to see what images you can extract using the various modes of the camera sensor. It's quite a powerful beast with edge detection and other useful modes.

As a simple serial camera interface for a PC I can see this being very useful, and it probably would need a PC to do clever video analysis. For PICAXE robotics, the lack of contrast may not be a major problem as it may well be possible to determine which pixels show an IR Led and which do not using just a PICAXE. An IR or 'silhouette' tracking camera on a rotate/tilt platform would be an interesting project. This may also be useful for flame detection in fire-fighting competitions and even for micromouse wall detection.

You've opened the door to many possibilities and it will be interesting to see what projects people do come up with. I would say a Gameboy Camera to PC interface via serial is definitely something 'Make' / 'Instructables' readers would be interested in.
 

Michael 2727

Senior Member
Well Done, Congrats.

PC optical mice use low-res image sensors, typically 36 x 36 pixels, some up to 64. Streaming video possibilities there maybe. Woo Hoo!
 

Brietech

Senior Member
Can anyone make sense of this? There is a setting for "exposure time" in one of the registers, but the scene needs to be perfectly still for the duration of the "capture" while I'm doing ADC reads. It can't be a "per-pixel" exposure time, as that would take a ridiculous amount of time (I currently have it set to around 320 µs, although I think it may be based on the clock going in, which is supposed to be ~500 kHz). Hmm.

For those who don't know how the camera works: you write to its registers to set up all of the parameters (bit-banged serial), and then raise a "start" line. Supposedly, after the "exposure time" has elapsed, you are free to clock out all 16k pixels; the image data is presented on each rising clock edge as an analogue voltage that you have to sample with an ADC. It currently takes me about a minute to capture a photo and send it back over serial.
 

hippy

Technical Support
Staff member
My understanding ( analogy rather than implementationally factual ) is that the array is 'zapped' at reset to clear the capacitors and these then charge up on exposure to light. The exposure time sets how long the caps are allowed to charge before being latched at the value they've attained.

It could be that each pixel in turn is zapped and then its cap allowed to rise, which would explain any requirement to keep the image still during the entire sampling period, although the timing diagrams in the datasheet suggest there's only one exposure time, between Start and Read ( page 17 in the datasheet I have ).

If you can live with black and white only, it should be possible to convert the analogue signal to a digital bit and stream these in using hardware or bit-banged SPI. With 115,200 baud you should be able to transfer the entire 128x128 image in around 200 ms, about 5 fps.
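
A rough, untested sketch of the sort of thing I mean ( the pin choices and the B115200_16 baud setting here are just assumptions, not taken from your hardware ) -

Code:
'pack eight 1-bit pixels per byte and send them raw over the hardware UART
'assumes camera CLK on output 7 and a comparator output on input 2
hsersetup B115200_16, %000		'115200 baud at 16 MHz
for w5 = 0 to 2047			'16384 pixels / 8 per byte = 2048 bytes
	b4 = 0
	for b5 = 0 to 7
		high 7			'clock the next pixel out of the camera
		b4 = b4 * 2 | pin2	'shift in the thresholded 1-bit pixel
		low 7
	next b5
	hserout 0,(b4)			'send the packed byte
next w5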
 

Brietech

Senior Member
Interesting thought. Black and white would SIGNIFICANTLY degrade the picture, but that is a heck of a lot faster. Especially with the "edge detection" mode turned on, it might be useful.
 

hippy

Technical Support
Staff member
An improvement on one-bit B&W would be to use comparators and an encoder to give multiple bits per voltage, and these could be selected by a 1-of-N multiplexer driven by a counter clocked from the clock line sent to the camera chip. The start line can be used to synchronise the first bit of every capture.

Before getting too deep into complex hardware ... 128x128 in 60 seconds is about 3.5 ms per pixel. I'm wondering where that time is being used and whether there may not be some scope for source code optimisation?
 

Brietech

Senior Member
I would welcome any software-only suggestions especially. I imagine it will be much faster when I have time to hook up some i2c FRAM to buffer everything in (1 MHz vs. 19.2 kHz for streaming it out, and only 1/3 of the data vs. my current setup). I think the serial comms are what is killing me time-wise.

Code:
setfreq em16
'gameboy camera software

'clk=out7
'Vout on camera = adc0
'reset = out6
'load = out5
'serial_in = out4
'start = out3

symbol gbdata = b0
symbol gbaddr = b1
symbol clk    = outpin7
symbol rst	  = outpin6
symbol load   = outpin5 
symbol serial = outpin4
symbol start  = outpin3
symbol ready  = input5

'initialise control lines, then pulse the active-low RESET line while toggling CLK
start = 0
w7 = 0
clk = 0
rst = 1

clk = 1
clk = 0
clk = 1
clk = 0

rst = 0
clk = 1
clk = 0
clk = 1
rst = 1
clk = 0

'0x00=z1-z0,O5-O0
'0x04=N,VH1-VH0,G4-G0
'0x0A=C17-C10
'0xFF=C07-C00
'0x01=P7-P0
'0x00=M7-M0
'0x01=X7-X0
'0x07=E3-E0,I,V2-V0

'Set Reg 1 = N,VH1-VH0,G4-G0
gbdata = %00000111
gbaddr = %00000001
gosub send_data


'set Reg 2 = C1 Register (shutter speed)
'gbdata = %00001010
gbdata = 0
gbaddr = %00000010
gosub send_data

'set Reg 3 = C0 Register (shutter speed)
'gbdata = %11111111
gbdata = 20
gbaddr = %00000011
gosub send_data

'set Reg 4 = P7-P0
gbdata = %00000001
gbaddr = %00000100
gosub send_data

'set Reg 5 = M7-M0
gbdata = %00000000
gbaddr = %00000101
gosub send_data

'set Reg 6 = X7-X0
gbdata = %00000001
gbaddr = %00000110
gosub send_data

'set Reg 7 = E3-E0,I,V2-V0
gbdata = %00000111
gbaddr = %00000111
gosub send_data

'set Reg 0 = Z1-Z0,O5-O0
gbdata = %00000000  'no calibration (10=+ calib, 01=- calib), offset of -15*32mV for output
gbaddr = %00000000
gosub send_data





'pulse START (with CLK running) to kick off the exposure
clk = 0
clk = 1
clk = 0
clk = 1
start = 1
clk = 1
clk = 0
clk = 1
clk = 0
start = 0



sertxd("waiting...")		'wait for ready signal to go high
waiting:
	clk = 1
	clk = 0
	if ready = 0 then waiting
	clk = 1
	clk = 0
ptr = 0
for b8 = 0 to 127				'128 rows of 128 pixels
	for b7 = 0 to 127			'read one row into the scratchpad
		clk = 1
		readadc 0,@ptrinc		'pixel voltage is valid while CLK is high
		clk = 0
	next b7
	ptr = 0					'rewind and send the row back as ASCII values
	for b9 = 0 to 7
		sertxd(#@ptrinc,cr,lf,#@ptrinc,cr,lf,#@ptrinc,cr,lf,#@ptrinc,cr,lf,#@ptrinc,cr,lf,#@ptrinc,cr,lf,#@ptrinc,cr,lf,#@ptrinc,cr,lf,#@ptrinc,cr,lf,#@ptrinc,cr,lf,#@ptrinc,cr,lf,#@ptrinc,cr,lf,#@ptrinc,cr,lf,#@ptrinc,cr,lf,#@ptrinc,cr,lf,#@ptrinc,cr,lf)
	next b9

next b8

sertxd("done")
end


send_data:
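	'clock out 11 bits: the 3-bit register address (low bits of gbaddr) then
	'the 8 data bits (gbdata), MSB first; LOAD is pulsed along with the final bit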
	load = 0
	serial = bit10
	clk	= 1
	clk	= 0
	serial = bit9
	clk = 1
	clk = 0
	serial = bit8
	clk = 1
	clk = 0
	serial = bit7
	clk = 1
	clk = 0
	serial = bit6
	clk = 1
	clk = 0
	serial = bit5
	clk = 1
	clk = 0
	serial = bit4
	clk = 1
	clk = 0
	serial = bit3
	clk = 1
	clk = 0
	serial = bit2
	clk = 1
	clk = 0
	serial = bit1
	clk = 1
	clk = 0
	serial = bit0
	load = 1
	clk = 1
	clk = 0
	load = 0
	clk = 1
	clk = 0
	return
 

moxhamj

New Member
This is an absolutely extraordinary effort. The low resolution is actually an advantage, as vision processing for things like robots needs to be kept simple. There is still enough data in a picture like this to build quite a sophisticated 3D model. This could be useful as the horizon detector in unmanned aerial vehicles.
 

Brietech

Senior Member
The horizon detector would actually be a cool idea. A friend and I were trying to come up with a use for a video headset I built out of an ancient camcorder viewfinder, and aerial photography (and navigation!) for RC planes came up. This chip actually has a bunch of built-in filtering features (edge detection, etc.). You can see some of the work someone has done with them here: http://www.geocities.com/vjkemp/gbcam.htm (that is the page I used to get it working). The hardest part was actually turning the output into an image!

As a side note, I think a 128x128 sensor will probably be the pinnacle of picaxe-based image processing. More than 16 kilobytes gets difficult to handle on a lowly picaxe chip.
 

hippy

Technical Support
Staff member
I think the serial comms are what is killing me time-wise.
Yup, especially as you are sending the data as ASCII text with CRLFs. I can understand why you're doing that: it is sensible during debugging and development, and no doubt makes it easier to get the pixel data displayed at present.

Going to raw byte sends will cut download time by 75% or more. You could also send each byte immediately after the READADC. If you use HSEROUT the byte will be sent on its way while the following commands are executing.

To be honest, I'm not entirely sure why you're using any buffering; it just adds extra time setting up pointers and writing then reading back the scratchpad, with an extra FOR-NEXT to do it. I'd roll the two FOR-NEXT loops into one "FOR pixelNumber = 0 TO 16383". I can't really see any gains in using FRAM or any other buffering.

Even at 16 MHz that should shave a fair bit of time off. If you can save 500 µs overall per pixel, that cuts download time by around 8 seconds.

It doesn't gain anything in timing terms in your main loop, but your "CLK=1:CLK=0" pairs could all be turned into PULSOUTs, which will save code space, although you're probably not short of that.
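
Roughly what I have in mind ( untested, using the clk symbol and ADC pin from your listing, and assuming the hardware UART can be set to 115200 baud at 16 MHz ) -

Code:
'one loop over all 16384 pixels, raw byte straight out via the hardware UART
hsersetup B115200_16, %000		'115200 baud at 16 MHz
for w5 = 0 to 16383
	clk = 1
	readadc 0,b4			'pixel voltage is valid while CLK is high
	clk = 0
	hserout 0,(b4)			'byte is transmitted while the next pixel is clocked
next w5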
 

Brietech

Senior Member
For the nested for-loops, I was going under the assumption (which may be false) that there was some "setup" time associated with a serout command (I had completely forgotten about hser!), so doing a couple of big serouts would cut down on time. Code space is definitely NOT a problem at this point.

I think the FRAM buffer will help because I can write to it at 1 MHz, so I can read 128 bytes into the scratchpad (or more, using up the variables too), blast it out over hi2c, and then fill it up again. Speeding up the actual CAPTURE is what is important, not getting it back to the computer. I feel like I can read the data in much quicker than I can send it out via serial, so completely eliminating serial from the capture pipeline would be ideal (I think the maximum hserout rate is still significantly slower than hi2c at 1 MHz).

Maybe the fastest would just be to go: 1) read the ADC, 2) write to i2c, 3) start over. That requires sending out the slave address every time, though, which would incur a penalty. I do wonder what the fastest capture time I could get is!
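
Something like this rough, untested sketch is what I'm picturing for the 1-2-3 version ( the FRAM slave address, word addressing and the i2cfast_16 speed are just assumptions about whichever part I end up using ) -

Code:
'capture straight into an I2C FRAM, one byte per pixel
hi2csetup i2cmaster, %10100000, i2cfast_16, i2cword	'i2cfast_16 gives a 400 kHz bus at 16 MHz
for w5 = 0 to 16383
	clk = 1
	readadc 0,b4			'sample the pixel while CLK is high
	clk = 0
	hi2cout w5,(b4)			'one write per pixel - slave address overhead every time
next w5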
 

hippy

Technical Support
Staff member
For the nested for-loops, I was going under the assumption (which may be false) that there was some "setup" time associated with a serout command (I had completely forgotten about hser!), so doing a couple of big serouts
You have a point there. The problem is quantifying how much time any particular code combination takes.
 