XBee DigiMesh, possibly that elusive sleeping router

ciseco

Senior Member
Hi Drac and anyone else interested in this kind of thing,

I received the firmware a good few weeks ago (then promptly forgot about it). I've not looked into it yet, but I stumbled across an article that grabbed my attention, so I looked at the MaxStream site. Wish I'd known then what I'd been sent; it seems to answer a few problems that have been posted recently.

http://www.digi.com/technology/digimesh/

Apparently with this new firmware you can have sleeping routers, and it covers both the 2.4GHz and 900MHz modules.

I'll see if I have time to have a go with it in the coming weeks. If anyone else has Series 1 XBees and would like the upgraded firmware, pop me a mail and I'll send it on.

Also, I then remembered that the XBee uses the Freescale chipset; that's got a Motorola HC08 8-bit processor on it plus the ZigBee radio, and they are £2.76 in quantities of 250 from Farnell. That's a price to get anyone's attention :) Just one question: has anyone ever programmed an HC08 processor?

Farnell do a dev kit for £17 with an HC08 on it.

Miles

ciseco

Senior Member
Just found the white paper on it

http://www.digi.com/pdf/wp_zigbeevsdigimesh.pdf?utm_source=mesh-wp&utm_medium=email&utm_campaign=Digi_Connections_0608

One bit states that sleeping is enabled by time synchronisation between modules, and each and every device can route to its neighbour, so in the event of one being down it will route elsewhere. It has no compatibility with ZigBee as a whole and is a completely proprietary solution, but it sounds like this is what they "should" have had in ZigBee. Maybe I should find that time a bit sooner.

"Digi releases mesh networking protocol for battery powered networks" http://www.antara.co.id/en/arc/2008/9/24/digi-releases-mesh-networking-protocol-for-battery-powered-networks/

moxhamj

New Member
Hmm. Complicated problem.

I'm not sure any commercial device can do quite what is required, but I'm still looking. Nodes should be cheap. But they should also be capable of running complex programs. The sort of complexity involves nodes working locally and independently to build up a database of the reliability of direct links to other nodes. This table should be available when requested, and the information about all the node links should go back to a central unit (eg a PC), which can then work out the optimum routing path that wakes up the minimum number of nodes.
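The "central unit works out the optimum routing path from the link tables" step is essentially a shortest-path search over a reliability graph. A generic sketch of the idea (Python; the node names and table are invented for illustration, this is not any particular product's algorithm):

```python
import heapq
import math

def most_reliable_path(links, src, dst):
    """Dijkstra over a per-link reliability table. Using -log(reliability)
    as the hop cost makes the cheapest path the one whose link
    reliabilities multiply to the highest value.
    `links` maps node -> {neighbour: reliability in (0, 1]}."""
    best = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    done = set()
    while heap:
        cost, node = heapq.heappop(heap)
        if node in done:
            continue
        done.add(node)
        for nb, rel in links.get(node, {}).items():
            new_cost = cost - math.log(rel)  # multiply probabilities = add -logs
            if new_cost < best.get(nb, float("inf")):
                best[nb] = new_cost
                prev[nb] = node
                heapq.heappush(heap, (new_cost, nb))
    if dst not in prev and dst != src:
        return None  # unreachable
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

# Made-up table: A-B and B-D are solid links, A-C is flaky
table = {"A": {"B": 0.9, "C": 0.5}, "B": {"D": 0.9}, "C": {"D": 0.9}}
```

With that table, `most_reliable_path(table, "A", "D")` prefers the A-B-D route (0.9 × 0.9 = 0.81) over A-C-D (0.45), which is the sort of decision the central PC would make before waking nodes along the chosen path.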

There also needs to be a fallback protocol where messages go everywhere (this is the one I do have working).

Complex programs on each node involve having firmware that is upgradeable over the air, as it will probably evolve. That is something that PICs can't really do. I'm currently looking at a CP/M type of operating system, where you can log into a node, execute commands, run programs, copy programs, run a program that puts the second node into router status and then allows hooking into a third node, and so on. CP/M is a text-based operating system but it is pretty intuitive. A question comes up on an LCD display, you answer it, another question comes up, etc. And it can be another board asking the questions and processing the answers.

I've got a hybrid Z80 and PICAXE board designed now and off at the manufacturers: 64K of RAM space, 32K of boot EPROM and 512K of flash RAM. It has all the hardware to run CP/M. The PICAXEs are brilliant at simplifying the input/output, e.g. analog voltages and serial data. And the Z80 has the big code space and the ability to load and run programs off the flash disk, and also run batch files, like DOS.

I'm now writing my own BASIC compiler to run on the board. Oshonsoft's compiler is pretty good, but I'd like some more instructions, particularly for strings.

All this will be open source.

Am open to other suggestions...
 

hippy

Ex-Staff (retired)
Has no compatibility with ZigBee as a whole and is a completely proprietary solution but sounds like this is what they "should" have had in ZigBee.
It does look more like what it should be. A shame it isn't compatible, but that might not matter too much for people like most of us here.

I may have misunderstood, but are you saying this firmware can be loaded on original Mk 1 XBees? If so, I wouldn't have a problem with wiping the firmware and going DigiMesh.

It certainly looks a lot easier to use than ZigBee has become, with its requirements for co-ordinator, repeater and end-point devices.

I'm wondering what the sub-text is. That ZigBee wasn't that well thought out beyond the point-to-point and multi-casting the originals offered? That ZigBee hasn't had the take-up expected ( can anyone name a commercial product using ZigBee? )? That ZigBee's hit a dead-end on the existing hardware? That Microchip's own MiWi proprietary stack is, or is positioned to be, winning the battle for market dominance?

My suspicion is that ZigBee has done well for the hobbyist and low-quantity buyers but in global terms is as dead in the water as WAP and IrDA before. All surpassed by better technology with ZigBee also being a complicated protocol and system to implement compared to a proprietary solution while not delivering what's wanted. A move to a self-configuring network makes more sense, it was what people expected of ZigBee but never really got.

As to the HC08, that's a superset of the old 6800 from years gone by, which I spent years coding for. Very easy to use IMO; I prefer its instruction set to the PICmicro, 6502 and 8051. Not as nice as the 6303 and certainly not the 6809, and perhaps not as flexible as a Z80, but a solid workhorse. I couldn't find the board you describe. Not sure how much a programmer would add to the cost.
 

ciseco

Senior Member
Results so far

Download latest XCTU 5.1.4.1

http://ftp1.digi.com/support/utilities/40002637_c.exe

Manually add the new profiles for std and pro to the directory (2 files for each)

C:\Program Files\Digi\XCTU\update\xbee

Make sure your XBee is running at 9600bps.

Swear at XCTU for a while, call everyone you can think of... not a clue... then a spark of inspiration: hit "Show Defaults" and now it will program over the top. Phew, two hours of my life I won't get back. No, stand corrected: I just tried a different module and had to punch "Clear Screen" and "Show Defaults" a few times before it would program. ^%^&ing XCTU, pile of ^%&.

Rather than PM me, I've posted the files up here (v8001); I've seen there's an 8003 about and will add it when I find it. I've also put up the latest manual for the 900MHz jobbie as it's got a section on DigiMesh; there's not a 2.4 manual yet, still being written.

http://ciseco.co.uk/forum/viewtopic.php?f=31&t=70&p=120#p120

The new device ignores "old S1" transmissions totally, so it has no backwards compatibility; you'll need to flash at least two to start playing.

Putting the two new devices, one into our interface and one into a dev board, they fired up immediately and could communicate as expected using aPROTOCOL, so really happy there. Haven't the time today to start playing with synchronised cyclic sleeping. I've been told the period can be set between something ms and a max of 4 hours.

So, back of fag packet, a duty cycle of 1 minute with 1 second wake at 3.3V would equate to (would need proving in real life):

59 s sleep @ 10uA (590uA·s)
1 s receive @ 50mA (50,000uA·s)
50,590/60 = 843uA averaged consumption

If that's true (though, as shown before on here, my mathematics might not be correct), it would be this for 1 hour/1 sec:

3599 s sleep @ 10uA (35,990uA·s)
1 s receive @ 50mA (50,000uA·s)
85,990/3600 = 23.9uA averaged consumption
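Those averaged-consumption sums can be sanity-checked with a tiny helper (Python, purely illustrative; it just re-uses the 10uA sleep and 50mA receive figures quoted above):

```python
def average_current(sleep_s, sleep_a, wake_s, wake_a):
    """Mean current over one duty cycle: charge used per cycle (amp-seconds)
    divided by the cycle length in seconds. All currents in amps."""
    return (sleep_s * sleep_a + wake_s * wake_a) / (sleep_s + wake_s)

# 1 minute cycle, 1 s awake: 59 s @ 10 uA sleep + 1 s @ 50 mA receive
minute = average_current(59, 10e-6, 1, 50e-3)    # ~843 uA
# 1 hour cycle, 1 s awake: 3599 s @ 10 uA sleep + 1 s @ 50 mA receive
hour = average_current(3599, 10e-6, 1, 50e-3)    # ~23.9 uA
```

Real hardware adds wake-up transients and radio re-sync time on top, so treat these as best-case floors.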

Found out in my discussions that there will soon be an 868MHz XBee too. Not sure if it will be long range like the 900; I suspect it'll be an even lower power job, but don't quote me on it, just my suspicion: either that or a 900 aimed more at global standard frequencies.


Miles

ciseco

Senior Member
Drac

Dippy and I were musing over being able to program a PICAXE over the air, and suspect there might be a way, by using one of the XBee DIOs to pull the line low to emulate what a PICAXE download does. My gut feel is that fragmentation of the serial data will make it fall over, but until it's tried I don't know.

Changing the bootloader could possibly get around this, but that's for people with less abused brain cells and more intelligence than me :)

"I'm not sure any commercial device can do quite what is required": take a look at Dust, though they ain't cheap. That's why this DigiMesh stuff is so interesting; it gives a similar sort of functionality at almost throwaway prices. Bound to be a fly in the ointment, there always is.

"The sort of complexity involves ......." I'm not sure if the answer is in there, but I suspect quite a lot will be covered by looking at the latest manual (posted above); I think it might do some of what you need.

Miles

moxhamj

New Member
Am watching the experiments with great interest.

Some very thoughtful input from hippy too.

Manuka and I are still working on friendcom - they do seem to have the hardware working at low currents and simple drop-in hardware would be the best answer.

Failing that, a synchronised mesh has potential.

Emulating a picaxe download has been discussed many times. I have a suspicion that some on this forum do know how it is done, but even if I did know, I'd still subscribe to the philosophy that it is getting too close to revealing the picaxe secrets, and I think there are good reasons to keep those secret. In other words, don't try to hack the picaxe, because the (tiny) picaxe tax you pay on each chip helps support new development.

Coding a wireless CP/M system involves a whole new operating system. I'm happily ensconced in writing a BASIC compiler from scratch, coding about one instruction every half an hour. And, just for the heck of it, why not code using the newer VB.NET idioms? So the Mid$() function from Microsoft BASIC is now written as Strings.Mid().
The StreamReader instructions are nice too. Instead of opening a file on disk, printing to it and closing it, it can be done with a StreamReader. Quite fun compiling VB.NET instructions to machine code, e.g.
' Inside a With block for an OpenFileDialog; reads the chosen file in one go
Dim sr As StreamReader
sr = New StreamReader(.OpenFile)
RichTextBox1.Text = sr.ReadToEnd()
sr.Close()

Except that instead of reading into a RichTextBox, read into a string array. That string array can be any size, so you can save and load big slabs of data on and off a flash RAM, or send them to another node with four lines of code.

Very powerful too. If only PIC chips had more RAM, it would be worth writing the instructions to add all the string handling. Driving LCDs is much easier using strings, and I've already got PICAXE code translated over to VB.NET BASIC and shrunk down to far fewer instructions. No more sending out characters one at a time to a 16x2 LCD using wrins and wrchar! Use CLS, then Print "Hello", then Print "World".

This is all relevant to low power radio, as low power nodes are going to need a lot of internal intelligence to conserve their power, and writing that intelligence is a lot easier using a high level language.
 

hippy

Ex-Staff (retired)
Wireless download shouldn't reveal any unknown PICAXE secrets; it's a 'break' signal from Programming Editor which kicks the PICAXE into download mode, and after that everything is just 4800 baud serial, no need to know what is actually sent or what it means.

Getting the 'break' over the link has been the major challenge ...

1) Physically from A to B, because it's not a normal serial data transmission. It's the same problem seen with USB-to-Serial leads which don't support that. Using the DIO lines does give a mechanism to get 'break' and serial over the wireless link which should work with just diode-mixing hardware, but I'll admit I did have problems when trying to do that myself.

2) Getting the Programming Editor to tell the PC side XBee to do what's necessary to get this 'break' via XBee to the other side of the link. That's the hard part and requires taking the serial the PE puts out, reforming it and passing it on. It can be done by hardware but the nature of the beast means a PICAXE isn't really suitable.

It can be done by PC software, but that means a loop-back: a serial port controlled by PE into a serial port used by this software, and then serial out to the XBee. That's three ports, and most PCs only have two at best. It is possible to build an interface Y-cable to use just two ports, or one can use two PCs.

Ideally there'd be a Virtual Serial Port for the software interface, with the PE 'downloading' into that and the software interface using just one port off to the XBee. Trying to find a free and usable VSP object to code with is impossible; most are demos which allow only limited comms before they lock out and need a re-boot, and many don't support this 'break' transmission ( "Grrrrr!" ). I did think about hi-jacking the VSP Rev-Ed use in the PE but (1) that's wrong, (2) the VSP designers thought someone might try that, and (3) I couldn't get it to work :)

I did get a free VSP to TCP/IP application working and pulled data in via TCP/IP but that is a real hack, hard to configure ( or describe ) and had its own problems.

Ideally, updating the Programming Editor to directly control a PC-side XBee is the best way to do it, but there's little incentive for Rev-Ed to put in the effort to do that. There's no way for that to be bolted on by a third party.

There is a final possibility; getting the XBee SDK and updating its firmware to do what's needed. That would be quite an undertaking though.

So it's entirely possible, just not very practical.
 

ciseco

Senior Member
Hippy,

That's good to know; I'll leave it on the back burner for now. Maybe when it rears its head again I'll come spend some money with you, as you have a far better grasp of the challenges. I'm assuming you take paid work, as one or two of the others do?

I've been looking at other alternatives to the PICAXE and nothing seems, on balance, to offer anything as easy, inexpensive or complete. I guess it was good fortune I ended up here.

Right, I'd better get on with some work; just investigating a possible extension to what we're doing with power line communications, as we want to break away from just XBee/ZigBee stuff (yep, all that political stuff).

This might surprise some people, but we are also planning a 433MHz solution for back-haul between 2.4GHz networks, though this might change in light of the 900/868MHz XBees and their ease of use.


Oh, one last thought on commercial ZigBee stuff: there's loads around, but not many calling it such, because they (the Alliance) can't get their act together in any short time frame to define interoperability. No one wants to stick their neck out; it'll confuse the consumer, as they'll wrongly think one ZigBee device will "talk" with another.

What they have come up with is, for want of an example, something akin to RF TCP/IP: just a transport. It needs a higher layer, just like we use every day (HTTP, FTP, web services etc.), all predefined ways of using data on top of a lower layer. There's little consensus, but they're getting there, slooooowly.

That's why we aren't waiting for simple things that sound glamorous like "smart energy profiles", and have come up with a tiny protocol that's platform independent and encompasses lots of different activities, so things can be done now, not in two years' time. Ripping out the transport and replacing it still means the devices work :)

IR, 433, ZigBee, powerline: it makes not a jot of difference to the device as long as you can present it with clean RS232.

Miles

hippy

Ex-Staff (retired)
No one wants to stick their neck out; it'll confuse the consumer, as they'll wrongly think one ZigBee device will "talk" with another.
One would rather expect that as an intrinsic feature of the technology.

Bluetooth had its own problems while it was a Personal Area Network, only when they got it to the state that consumers saw it connect this to that ( phone to earpiece, phone to PC ) did it really gain any firm footing. Given what it actually does and is used for in practice, compared to what it is potentially capable of, there was a lot of wasted effort which went into its development and considerable overhead now required to use it.

I think one of the problems these days is people are looking for concepts rather than just solutions, and particularly looking for concepts they can then control, sell, license and make money on the back of. It's all about money and the greed which accompanies it; it killed DAT, subdued the MiniDisc, and the Betamax versus VHS battle should never have needed repeating.

It seems people continually fail to learn that for ubiquitous success the key ingredients are interoperability and a useful purpose for it.

I've downloaded the DigiMesh stuff and I'll let you know how I get on.
 

ciseco

Senior Member
Too true, it's its own piece of bloatware masking its really useful feature, the simple comms side of things; everything else is quite superfluous window dressing. I'd love it if the XBee came with just 10 pins at 2.54mm spacing instead of the silly 2mm ones, where the connectors are nearly 20% of the cost. I like the fact Digi have decided to do this; it shows they see the market splitting and people want usability, not frills. There's no AES on these yet though; that's going to be yet another firmware update needed. So glad I've not chosen to write anything actually ZigBee dependent, I'd be pulling my hair out (not that I have much these days).

Miles

hippy

Ex-Staff (retired)
Just a note on installing xb24-dm_8001.ehx/.mxi from the .zip file ... Watch where the files are actually extracted to !

They need to be in C:\Program Files\Digi\XCTU\update\xbee not in a \xbee_dm24 sub-directory of that - or at least they did for me.

Haven't got round to re-flashing or playing with hardware; thought it was best to read the PDF first and look at the options XB24-DM gives ...

I'm impressed. It seems a step back to the original principle of minimal configuration and a 'works out of the box' mentality. With numerous nodes in place it's still possible to do single point-to-point, broadcast to groups, broadcast to all, and to set up 'linked pairs'. All the complicated configuration has gone, which is a welcome improvement.

Unless people explicitly want "ZigBee", I think they'll be very happy with DigiMesh. It might not fit exactly with what Dr_Acula needs, but it should certainly be possible to build that sort of system with auto-routing and network discovery, so it's really a case of: if you want to extend the network, just add another node and it will sort itself out. What I think most people were hoping for when XBees were first introduced. Assuming it all works, XBees have just shot up in my estimation.
 

ciseco

Senior Member
Hippy,

Well found, just tried one with 8003 and one with 8001, still works. Then did the other one so both were 8003, still working.

Shame there's no update.txt or something to tell us what the difference between revisions is :(

I agree; I hope it actually does what it claims. I just need a couple of "free" hours to go play and see, and that's the more difficult part, finding the time.

Miles

hippy

Ex-Staff (retired)
^%^&ing XCTU pile of ^%&

Cannot disagree with that. I've finally got the DigiMesh 8003 firmware running on two XBees, but not without, as is usually put less politely, getting naked and dragging a particular body part through broken glass. I had to go back to XCTU version 4.9.8 to get anywhere. The problem is a compound one:

1) XCTU not being the best piece of software written
2) The firmware download protocol being flaky or not robust enough
3) Lack of failure diagnostics from XCTU
4) Firmware download requiring physical handshaking for serial
5) No Firmware download support on Rev-Ed AXE210 boards
6) 2mm XBee pinout pitch makes it hard to build download circuits

I consistently ran into "Lost communications with modem" problems with 5.1.4.1 when trying to upgrade the XBees. An old 4.9.8 worked ( eventually, same "lost" problems initially ) but reports failures on initialising settings combined with a firmware download, yet seems to work when writing settings without a firmware download.

The most disappointing thing, though, is that ( having set them both up as close to default as I can, with multicast broadcasting, no sleeping ) both XCTU's 'range test' and my own 'ping test' software show that some 10%-30% of all packets are lost: "timeout waiting to receive". That's with one XBee connected to the PC and the other in loop-back mode ( TX straight to RX ), both at 9600 baud. This is with 6" to 12" of separation, and it doesn't matter which way round I use the two XBees.
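A loss-measuring ping test of that kind can be sketched transport-agnostically (Python; the serial wrapper in the comment assumes pyserial and an invented port name, it is not hippy's actual test software):

```python
def loss_rate(send_and_echo, count=100):
    """Fraction of numbered ping packets that fail to come back intact.
    `send_and_echo` abstracts the link: given the outgoing bytes, it
    returns whatever arrives back before a timeout (possibly b"")."""
    lost = 0
    for i in range(count):
        msg = b"PING%03d" % i
        if send_and_echo(msg) != msg:
            lost += 1
    return lost / count

# Against real hardware one would wrap a serial port, e.g. with pyserial
# (port name, baud and timeout are assumptions, not the actual rig):
#   import serial
#   xbee = serial.Serial("COM3", 9600, timeout=1)
#   rate = loss_rate(lambda m: (xbee.write(m), xbee.read(len(m)))[1])
```

Numbering the packets matters: it distinguishes a genuinely lost packet from a late echo of the previous one.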

I did test them both with the existing 10A3 firmware using the same setup prior to upgrade, just to prove everything was working, and that exchanged thousands of packets with not a single loss or error. I've also lost the use of the 'Status LED' on both AXE210s, but I guess that's a DIO<whatever> not configured correctly.

When XBees work, they work well. Whether the packet loss is a problem in my downloading or a firmware flaw I don't know yet, but it's not surprising that people like Dr_Acula, who is demonstrably more capable at wireless networking than many of us, remain so reticent to jump on the XBee bandwagon.
 

ciseco

Senior Member
I'm just printing your findings out; not got time to try and replicate them now. Hopefully I'll get a chance tomorrow, when I'm at the embedded show, to wave it at someone from Digi and see what they say.

What rev of board have you got (it's on its belly, from memory)? Were they from Tech Supplies? If so they might be quite a few years old. The first lot I bought (9-10 months ago) with the rather &%^% AXE210 boards were 3 years old at the time; I wasn't too impressed when my nice "new" shiny kit arrived and I then couldn't upgrade them to a current firmware.

Let you know what happens.

Miles

hippy

Ex-Staff (retired)
Hardware versions of mine are both 1705, very early editions. Once I discovered how to, I have not had that much problem upgrading their firmware.
 

moxhamj

New Member
Some very interesting points there, hippy. Miles - if you can run this past someone from the company please do.

A mesh needs to be low cost, easily (maybe even automatically) upgradeable, low current, and above all, easy to use. The "Apple" philosophy - working straight out of the box. I'm going to continue to put some effort into finding such a thing, and also some effort into rolling my own in case it doesn't exist yet. Would be very interested to see what digi say...
 

ciseco

Senior Member
Ok, I'll try my best to explain, warts 'n all.

I eventually met a guy called Rauul from Digi and briefly explained what you had witnessed with the packet loss.

I was greeted by what I can only describe as a face of indignation, with such ferocity you'd have thought I'd asked permission to defile his daughter. His following comments were little better. To paraphrase, "Miles, you are an idiot" was how it reached me, then to be told "and if you'd looked at the manual you'd know".

Now, as some of you might be aware, I've not got the slowest of fuses, but on this occasion I managed to refrain from knocking him off his feet.

Thankfully he switched to a far more useful and less patronising mode. In this he stated that this is what you would expect from a broadcast, and that to rely on the device you need to set the destination, along with all sorts of other stuff.

My mind immediately switched to "well, that's not what I'd call drop-in networking", so I left before getting into one with him.

As I was driving back I was going through in my mind how I could describe this to you all, and it dawned on me: he was right and I was being an idiot.

How best to describe this? Mmmmm, this is how it figured in my mind, as I'm from a computer background.

If we look at the way computers talk, there's UDP and TCP comms.

UDP is essentially broadcast (same terminology there)
TCP is unicast

When you send a UDP message it's like shouting in the middle of the street, "WANT TO BUY MY APPLES"; some people hear you and some don't (10-30% message or packet loss).

If, like TCP, you tap someone on the shoulder and then tell them the message, you know they have heard you (0% loss).

Then I remembered one of the features of these devices. Unlike simple data cards, which echo incoming bytes to the serial interface so you have to decide through programming whether the message is valid or not,

these do it for you. BUT the rub, or benefit (depends on your view), is that any incomplete or corrupted message is simply dropped.

So this explains a lot.

In broadcast mode the sender XBee fires your message into the air without a care in the world. If another XBee receives that message and it's valid, all is well (70-90% of the time in your test, hippy). If, however, just like with a 433 card (where at least you can actually see it happen), you get a corrupt byte, it works out it's not valid and drops it (hence the loss).

In unicast mode you specify "I want to talk to" (back to computer speak) 192.168.0.1 (or however they represent it), and the conversation starts, is acknowledged, and retries if errors are found: hence a more dependable method.
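The UDP half of the analogy can be shown with ordinary sockets (Python; on loopback the datagram reliably arrives, which is exactly the contrast with a radio, where it might not):

```python
import socket

# UDP is fire-and-forget, like XBee broadcast: no ACK, no retry, and a
# corrupted frame simply vanishes. TCP, by contrast, ACKs and retransmits.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))      # grab any free local port
rx.settimeout(2.0)             # a real listener would eventually give up
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"WANT TO BUY MY APPLES", ("127.0.0.1", port))

data, _ = rx.recvfrom(1024)    # over the air, this line might time out instead
tx.close()
rx.close()
```

The sender gets no confirmation of any kind; if delivery matters, the acknowledgement and retry logic has to live somewhere above this layer, which is precisely what unicast mode adds.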

Now here's the issue.

I never relied upon XBee acknowledgements, routing and retries, because I program each sensor and the PC software to do it. I use the "zigbee" network in broadcast mode and mop up the errors, relying on the XBee just doing CRC etc. If all's not well I like it discarding the packet. For me that's cool: stuff either gets there correct or never at all. An easy situation to cope with, and why we can use IR, cable, tin cans or indeed any other method with our protocol. There's a guy in NZ using it over 433 already, but he has to modify it to contend with the issues that brings (I think he adds two extra bytes for error checking). I also like the XOR idea already floated here by someone.
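The "two extra bytes for error checking" idea might look something like this (Python; a guess at the sort of scheme, since the actual 433 code isn't shown in the thread):

```python
def add_check(payload: bytes) -> bytes:
    """Append a 16-bit additive checksum as two extra bytes."""
    s = sum(payload) & 0xFFFF
    return payload + bytes([s >> 8, s & 0xFF])

def strip_check(packet: bytes):
    """Return the payload if the checksum matches, else None; dropping
    the packet, just as the XBee silently drops corrupt frames."""
    payload, chk = packet[:-2], packet[-2:]
    s = sum(payload) & 0xFFFF
    return payload if chk == bytes([s >> 8, s & 0xFF]) else None
```

An additive checksum misses some error patterns a proper CRC-16 would catch, but on a noisy 433 link it already turns "gibberish received" into "packet arrived intact or not at all".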

If I were to utilise their method I'd have to do a little extra work at the point of sending, to say "send HERE". That's then going to exclude me from these other transports, unless Digi release the code so that I could use the same process for IR, cable, tin cans etc.

As with everything in life, pros and cons.

For me (personally) I still feel the XBee offers much better value than radio cards. Admittedly 433s can be as little as a quarter of the price, but what they won't do without me writing code is give me a message I can actually depend upon getting there or not; I have to code for gibberish being received (not difficult). I can have multiple overlapping and non-communicating networks (quite difficult). Also I can depend upon its security and encryption (very difficult, and impossible on a PICAXE).

My next move is to see if I can meld the new DigiMesh into what we do. I think for us it will just work, because we rely on so little of the ZigBee stack, and doing 3 retries through software, even with only 70% per-try success, isn't too bad: 70% of the remaining 30% is 21%, and 70% of 9% is 6.3%, so the expected overall success rate should be around 97.3%.
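That retry arithmetic generalises to a one-liner (illustrative Python; it assumes each try succeeds or fails independently of the others):

```python
def overall_success(per_try: float, tries: int) -> float:
    """Chance a packet gets through within `tries` independent attempts:
    one minus the chance that every attempt fails."""
    return 1 - (1 - per_try) ** tries
```

With 70% per-try success and 3 tries this gives 1 - 0.3^3 = 0.973, matching the running total above (70% + 21% + 6.3%).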





If I can suggest something to Drac: what would be of great personal interest is if you could write a wrapper that provided a similar mechanism over 433, then embed (or affix) it onto the radio card, so anyone could add a PICAXE (or whatever) on top of all that (seems like you are already going that way with the Z80/PICAXE). Only thing is, the cost then increases; add that $20 3uA card and it's now more expensive for fewer features.

Saving serious power, to my mind, needs to be time based, and this doesn't need long-term RTC accuracy. I assume the reason the DigiMesh nodes can only go 4 hours max is that they are using the internal MCU to do the timing, and that's the best they can achieve without a proper clock.

I've already mentioned that in a chatty mesh those 27mA/3uA jobbies will spend most of their time being woken by traffic not destined for them, or by stray other transmissions; if they stay up for even half the time, that'll be a 13.5mA average.

IF (I stress) it's possible, bringing a 50mA/1uA device up for 100ms (120 bytes at 9600bps) in every second would equate to an average of 5mA + 0.9uA.

If that's possible, perhaps our simple 12-byte protocol would only need 10ms, so an average of 0.5mA + 0.99uA.

I bet in real life I could get nowhere near this, but you can see where my mind is going: reducing the duty cycle is the way, unless there is a device that can be woken on an individual basis.

Nothing is ever easy :(

Time for a cuppa

Miles

hippy

Ex-Staff (retired)
Thanks Miles, and especially for the explanation. But ... it all works perfectly with the old firmware, and I've even had fun hiding XBees either side of baking trays and inside saucepans and doing all manner of things to corrupt transmission; apart from knocking the signal down to unusable, it never lost a single packet in the same broadcast, no-sleep configuration. Now it's in a perfect configuration and still fails badly. Your scenario holds true, but why lost packets with DigiMesh and not with the old firmware?

It is interesting that when pinging slower ( checking RSSI via software between packets ) there are fewer packets dropped. That suggests to me it's a timing / responsiveness issue rather than data corruption.

Maybe it's that there are no transparent error retries or something like that with DigiMesh when there were with the old setup; the problem was always there but being masked by the technology.

I'll set them up as a one-to-one pairing and give it a re-test and let you know how I get on.

Full marks for holding in your anger, I'd have probably turned him into the first salesman with anal connectivity via DigiMesh :)
 

ciseco

Senior Member
"but why lost packets with DigiMesh but not with the old firmware" - dunno why it's dropping more. Does the RSSI indicator still light (I guess the only easy indication something is being received)? Wonder if there are any debug modes.

I'll try doing some tests too; I have some things to try. Are you using them in default broadcast mode, or have you set a destination? I seem to remember you can't set the number of retries in MaxStream MAC mode; wonder if it's doing some of its own, but then as a broadcast it shouldn't. We'd need something like Wi-Spy to find out what's actually flying about.

Timing wise, what happens at different baud rates, or is it just message frequency? How many bytes?

I'd really like to prove something's amiss and get back to this guy and see him explain it :)

There can only be something it's doing differently, as it's the same radio module.

Mmmmmm, by hook or by crook we'll have to figure it out.

He did let the cat out of the bag that there's at least an 8004 floating about, but not yet released. I bet it'll take a good few months to settle; if we can prove something early on they are more likely to work on it whilst they still have an appetite.

Would be good to get some other people trying it. Anyone? Drac, you have some, don't you?

Miles

hippy

Ex-Staff (retired)
Yippity-doo-dah

Success !

1) I programmed the loop-back XBee to have the destination address of the PC sender, and errors dropped to between 3% and 8%. I'd expected a 50% improvement but seem to get a 75% improvement. I suspect that's because the sender has just sent and is still awake and on-air when a reply comes in.

2) I then set the PC-side XBee destination address to the loop-back ( the tightest, paired system ) and got 100% success, no lost packets or errors, on over 15,000 exchanges.

3) Setting the loop-back to broadcast but still sending from PC to its address and same result as in the first example, but that's expected - It is loop-back ( via broadcast ) not a responder ( to the specific sender ).

So, it looks like DigiMesh works and works well providing it's point-to-point, and that is also the case I would say for multi-to-single-point such as sensors to base station.

In practical PICAXE terms, sensor to base station should be okay as above, and if you want to control your PICAXE-robot army sending commands to its specific XBee will also be okay. Broadcast and having each PICAXE decide if a message is for them is however a very bad idea, but that's not the best network design anyway.

XBee does have a mechanism to identify a named XBee and set its address which makes selecting destinations relatively easy and also gets round multi-base station issues. It's a different way of thinking to broadcast everywhere but not difficult.

@ Miles : Looks like the TCP/IP and UDP analogy was right.
 

ciseco

Senior Member
Cool, some much better news.

1. You might be right. Could you put an AXE on there to receive, then ack, to introduce a delay? But then you had 100% the other way, so maybe not worth finding out.

2. Oh cool, so indignant man was right

"So, it looks like DigiMesh works and works well providing it's point-to-point, and that is also the case I would say for multi-to-single-point such as sensors to base station."

For how to do true multipoint, I need to get into the manual to find out how to resolve stuff. Apparently there's something like (as blokey described) a function a bit like DNS, where you first pop a sensor into broadcast mode to get this database, then drop it to multipoint. How it figures it all out is an unknown to me yet.

I'm curious: "Broadcast and having each PICAXE decide if a message is for them is however a very bad idea, but that's not the best network design anyway."

I can't see why it's a bad idea. Simplistic, yes, but that's a big plus point. I can see it's a struggle to do anything advanced like routing; bridging is possible. Reminds me of NetBEUI in a way. TCP/IP became the de facto standard when multiple networks needed to be routed, and I can't see there'd ever need to be such routing within this sort of wireless comms, unless you wanted a cloud that covered large areas with lots and lots of sensors - but then you'd have much greater complexity than ZigBee entails, and I'm not a fan there. There's just not enough bandwidth either; you'd surely have multiple segments joined by other means and leave the band free for comms. This is why I'm working towards something like 192.168.0.1,aC00T5---- and 192.168.0.2,aC00T5----: although both sensors are called the same, they are on different gateways so are different sensors. That way you could have a light switch in your house that turned a light on in mine even if the two had identical sensor IDs, and they would route via IP, which we know already works. You could of course just have WiFi'd sensors and miss a step, but the cost increases.
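Miles's gateway-plus-sensor-ID addressing can be modelled in a few lines; the `gateway,sensorID` string format is taken straight from his example above, while the parsing itself is my own sketch:

```python
def parse_sensor_address(addr):
    """Split a 'gateway_ip,sensor_id' string into its two parts."""
    gateway, sensor = addr.split(",", 1)
    return gateway, sensor

a = parse_sensor_address("192.168.0.1,aC00T5----")
b = parse_sensor_address("192.168.0.2,aC00T5----")
# Same sensor ID, different gateways: still two distinct sensors,
# which is the point of the scheme.
print(a != b)  # True
```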

Really keen to hear your thoughts as it could have implications with what I'm doing.

Miles
 
Last edited:

hippy

Ex-Staff (retired)
I'm curious: "Broadcast and having each PICAXE decide if a message is for them is however a very bad idea, but that's not the best network design anyway."

I can't see why it's a bad idea, simplistic yes, but that's a big plus point.


Bad idea for XBee because of this 30% potential packet loss in broadcast mode. Bad idea in general because it wakes up every receiver and ties up every processor when they could be doing something else.
 

moxhamj

New Member
Hmm - this is a thread to really give the brain cells a workout.

Miles - did that rep go to the Basil Fawlty school of sales or something? Or maybe he was actually one of the designers who is very close to the project and so more easily offended? It could be interesting to brainstorm some of the concepts with someone from one of these companies (so long as one's head doesn't get bitten off), but we can brainstorm it here too.

Ok, firstly re nodes deciding if a message is for them: if nodes are on all the time then at the very least they are going to have to wake up and check the first few bytes to see if the message is for them. But a smarter system might be for there to be messages that don't just wake up nodes, but also tell specific nodes to go to sleep for a certain amount of time as they are not needed. In fact, that could be as simple as a one byte instruction and then one line of PICAXE code.

TCP/IP is useful to study, and packets get routed by all sorts of different routes. However, all routers are on all the time in a TCP/IP network, so there is no cost in sending messages via different routes.

With low power radio, there is a cost in waking up a node, and there is a cost in retransmitting a message. Once a network path has been determined, it may be more efficient to use that path for a series of messages, rather than close that path down and open up a new one. And there may also be a need for power sharing within the mesh, so all nodes use their battery capacity roughly equally. So some routes may be chosen that are deliberately longer than necessary, in order to preserve a node that has been chatting a lot recently.

It is also useful to consider not just how nodes work, but how they fail and how they respond as the packet loss percentage rises. If, for instance, they respond to a high % of loss by retrying more and more, that will swamp the network with useless packets. It makes more sense for the network to determine its own black spots and report these back to the user, in a simple form like a picture with all the nodes, and a recommendation "put another node here".
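The black-spot reporting described above only needs each node to keep a sent/acknowledged count per link. A minimal PC-side sketch (the 50% threshold and the data shape are my own assumptions):

```python
def link_quality(stats):
    """stats maps (node_a, node_b) -> (packets_sent, packets_acked)."""
    return {link: acked / sent for link, (sent, acked) in stats.items()}

def black_spots(stats, threshold=0.5):
    """Links whose delivery rate falls below the threshold - the
    'put another node here' candidates."""
    return [link for link, q in link_quality(stats).items() if q < threshold]

stats = {("A", "B"): (100, 98), ("B", "C"): (100, 40), ("A", "C"): (100, 5)}
print(black_spots(stats))  # [('B', 'C'), ('A', 'C')]
```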

I see several types of messages:

1) A message that goes to all nodes. Every node wakes up, every node forwards it on after a random delay, and every node only forwards it on once.

2) A self-generated message from a node to discover nearby nodes. Any nodes nearby would wake up and start some experiments to determine the link reliability. This might not need to happen very often - maybe once a day or once a week.

3) A request by a node to open a link to a specific other node.

4) A request by a node to shut down a specific other node for a specified period of time.

5) A request by a node to open a link to a specific other node and request that node become a bridge, so that messages can now be forwarded on to other nodes.

6) A complete download of a file, which might be many kilobytes and might be routed via a number of intermediate nodes. This would need error checking, packets etc.
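The six message types above, plus the forward-once rule from type 1, could be sketched like this (the enum names and the `Node` shape are my invention; the forward-once logic follows the description in the list):

```python
from enum import Enum

class Msg(Enum):
    FLOOD = 1        # goes to all nodes; each forwards it exactly once
    DISCOVER = 2     # probe nearby nodes, measure link reliability
    OPEN_LINK = 3    # open a link to a specific node
    SLEEP = 4        # shut a specific node down for a period
    OPEN_BRIDGE = 5  # open a link and ask that node to forward on
    FILE = 6         # multi-packet download with error checking

class Node:
    def __init__(self, name):
        self.name = name
        self.seen = set()  # flood message IDs already forwarded

    def handle_flood(self, msg_id):
        """Forward a type-1 message at most once; a real node would
        also wait a random delay before retransmitting."""
        if msg_id in self.seen:
            return False   # duplicate - drop it
        self.seen.add(msg_id)
        return True        # forward to neighbours

n = Node("A")
print(n.handle_flood(42), n.handle_flood(42))  # True False
```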


I think with these sorts of instructions the network can perform a number of different functions. You can just broadcast a message and it goes everywhere, and I have this working now and it works fine, but it isn't the most efficient - as it stands the nodes need solar panels worth $100 each, and I see a node as something more like a solar garden light or a little BEAM robot rather than a huge panel on a pole.

You can work out the black spots in the network. You can work out if nodes have died. You can work out which nodes are right at the limit of their range.

You can design a path from A to B. You can design multiple paths on a PC and upload these paths to nodes so if node C wants to send to node D, it has the best path already programmed in and doesn't need to work it out each time.

You can open up a link from A to B, then tell B to be a router, and open a link from A to C via B. This link could be quite complex. It could be text instructions to a CP/M computer, and the instructions could be CP/M or DOS instructions like "save a program" or "run a program", or even a complete file which is an upgrade of the firmware.

This network can also grow easily. The very basic "route all messages everywhere" program could be in EPROM. This will always work, even if the flash RAM/EEPROM gets corrupted through a dodgy firmware upgrade attempt. Then you download a new program which is more complex, save it in EEPROM or flash RAM, and set a single byte so it boots into that new program on startup. The new program could be more complex - it might include the code that works out % errors in comms to nearby nodes.
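The ROM-fallback boot scheme in that paragraph boils down to a couple of checks. A sketch (the function and flag names are hypothetical):

```python
def select_boot_image(boot_flag, flash_image_valid):
    """Boot the upgraded program from flash/EEPROM only when the flag
    byte is set AND the image checks out; otherwise fall back to the
    always-working 'route all messages everywhere' program in ROM."""
    if boot_flag == 1 and flash_image_valid:
        return "flash"
    return "rom"

print(select_boot_image(1, True))   # flash
print(select_boot_image(1, False))  # rom - dodgy upgrade, safe fallback
print(select_boot_image(0, True))   # rom - flag never set
```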

Ok, if all that is in a commercial product then great. If it is in a commercial product but it is all secret, then not so great. I'm going to keep coding this as I think it is quite possible to do with a PICAXE/Z80 hybrid.
 
Last edited:

ciseco

Senior Member
Ah, now I see what you mean - bad when you combine the two, got you. My PICAXE code does do things whilst waiting to receive, but I'm forced to use the X parts for background receive or you get into the old serin hang issue. That way you can also cope with fewer than all bytes received (so can use other kit than XBee). Using the X parts is also necessary as the lesser brothers haven't got enough code space for anything more than really one function. All sensors "should" be able to report their firmware/device type & capabilities and reply to echoes (like a computer ping), so that's 4 functions before even coding "get me a reading" or "go to sleep". I've got socket switches and light switches running off 14Ms but they only do a single command. So I'm now sticking to 28s for PICAXE, and Dippy is doing me something PIC-based on similarly packaged chips so the pinouts remain the same.

All receivers are always awake unless you have told them to sleep; that function is dealt with elsewhere. There's no easy way of doing it without syncing, which is why I chose a different and easier approach.

Phew, thought you might have had a show stopper. I already deal with both issues but not "on chip", would need extra components and probably a different micro that can multitask.

Drac, hehehehe, I do suspect he was one of the designers, hence the peer-down-the-nose "you are a fool" attitude.

Your mind works like mine, telling them to sleep is my workaround. Here's what I do. Let's say it's a temp sensor I want a reading from every 5 mins.

The sensor first powers up and says hello, I'm here. The PC schedules two commands for it every 5 mins: first "tell me the temp", then once received it says "sleep for 4:59". Then "hopefully" when you ask again in five mins (have to put in an allowance for no RTC) it's already woken up again, then repeat. Not the tightest of tolerances but works just fine. Trying to sleep for less than a second or two will probably break this method and would involve the PICAXE knowing what the exact time is, but that is allowed for by just changing the sensor code; the process remains identical.
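The PC-side half of that 5-minute cycle is simple enough to sketch; `ask_temp` and `send_sleep` below stand in for whatever actually talks to the XBee:

```python
PERIOD = 5 * 60   # poll every 5 minutes
MARGIN = 1        # wake margin in seconds - the 'no RTC' allowance

def poll_cycle(ask_temp, send_sleep):
    """One scheduling round: 'tell me the temp', then once received,
    'sleep for 4:59' so the node is awake again just before the
    next poll."""
    reading = ask_temp()
    send_sleep(PERIOD - MARGIN)  # 299 s = 4:59
    return reading

sleeps = []
print(poll_cycle(lambda: 16.6, sleeps.append), sleeps)  # 16.6 [299]
```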

Let's also say the sensor has a loop that checks the temp, and if it reaches an alert point, it self-wakes (the XBee) and sends an announcement alerting the fact something has happened. You have to catch this programmatically and then put it back to sleep again if desired; might be able to do it on chip but it's easier at the PC end.

Having tiny flea-powered devices is fine if you accept they can't act as forwarders; haven't a solution to that :(

I have allowed for forwarding, but it requires that a device be permanently on; can't think of a way around that at the moment with this kit. However it could be attempted with a pair of those 3uA 433 modules and very directional antennas, or a pair of DigiMesh modules doing their sleep/wake cycle and some on-chip buffering.

My idea would be to have micro-powered mini networks backhauled with something that might require $100 of solar panels; trying to achieve both isn't feasible without losing functionality somewhere else. See Dust Networks - they backhaul over TCP/IP and have sold 25 million battery powered units; they have the resources to crack this if it were possible. Any type of power saving decreases the response time, which is to my mind more of an issue than permanently powering repeaters.

Any device can talk to any other if it's awake, but you could always queue at the PC end.

So essentially it's point to multipoint. Doing true multipoint is beyond me; it needs significant "on board" rules/computation and makes the design much more complex. As this way achieves almost everything, I conclude in my own mind that getting the extra few % of functionality is not worth the extra cost and effort. That said, there's no reason why you can't have networks of more than one type and bridge into a fully meshed one and accept that you have to do more work - this is when the simplicity of design works in your favour. If I designed around DigiMesh only, I could only ever have XBee-powered devices, something I think is a major limitation in its own right unless they decide to publish their code, something I've discussed only briefly but don't expect them to. If they did, bingo - you could have DigiMesh capabilities on any radio gear: IR, cable, CB, wet string etc :)

Right, must get some work done.


Miles
 
Last edited:

ciseco

Senior Member
Maybe not - just seen the temp is only 16.6 degrees in my office, so have turned the heating on from my PC before I venture over there :)

Drac, what is the minimum response time you need from your long distance sensors?
 
Last edited:

moxhamj

New Member
16.6 degrees F or C?

Minimum response time varies. Most of the time, messages are not time critical and could be as infrequently as every 30 mins. But if you are doing downloading of new firmware, you want a node wide awake and forwarding packets on instantly. But that would be rather an infrequent situation.
 

ciseco

Senior Member
Just been playing and noticed that, unlike the old software, the new one appears to require that the baud rate be identical. The two systems are very different, as we are finding out.

Have been chatting with our account manager and said that unless they offer up the DigiMesh code to a wider audience it makes little commercial sense - putting all your eggs in one basket on something that only one manufacturer makes is too risky :)

Actually, ditching the extra ZigBee features and going lower to 802.15.4 might be a far better option overall and would retain interoperability across many different devices. This is what Dust and others have done.

I've asked for a trial kit of the lower frequency device. 868MHz will be the European version of the 900 and will, he claims, do up to 20 miles, but only at 9600bps.

Drac, found something of use for you: 433 mesh, 5.5mA receive - yet again, though, a proprietary solution.

http://www.rfm.com/products/pdf/dm1810-434mr.pdf

The more I read and the more I think, although as Dippy states it's not pretty, having a very simple protocol that can be thrown over the top of any platform makes real sense; then the transport can be changed easily as companies stop production or introduce better kit. We were for years able to do HTTP/FTP etc over RS232 (dial-up modem connections) before we got into ATM-based ADSL that does the same (encapsulates IP).

Please, guys, I ask you: look at what we are doing and pinch the command/reply/sleep method for your own RF use. That way, what we produce only needs the comms section swapping; the same code could exist across all platforms and make all our lives that bit simpler.

Miles
 
Last edited:

ciseco

Senior Member
16.6 C - it's not often warm, but hardly ever that cold here in Blighty :)

Oh, 30 mins, cool - on-demand sleeping via request will suit you down to the ground. No RTC needed, just a bit of code at the final receiving point (in my case a PC, but we hope to go micro afterwards). You then wait till the next 30 min wake, don't tell it to sleep, change settings, then go back to default behaviour. Only downside is having to wait till it's next up before you can change the settings; upside being such a low duty cycle that it makes little odds what the transmit current is, as long as you can sleep really low.
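Miles's point that transmit current "makes little odds" at such a low duty cycle is easy to check with some arithmetic (the current figures below are illustrative, not measured):

```python
def average_current_ma(awake_ma, sleep_ua, awake_s, period_s):
    """Mean current for a node awake for awake_s out of every period_s."""
    duty = awake_s / period_s
    return awake_ma * duty + (sleep_ua / 1000.0) * (1.0 - duty)

# Say 50 mA while awake, 10 uA asleep, awake 2 s out of every 30 min:
print(round(average_current_ma(50.0, 10.0, 2.0, 30 * 60), 3))  # 0.066
```

On those made-up numbers the average is roughly 66 uA, dominated by the awake burst but still tiny overall, which is the upside Miles describes.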

Miles
 
Last edited:

moxhamj

New Member
How much for that module?

Be happy to borrow examples, but I got completely lost visiting the ciseco website. Sorry - maybe it is just me.
 

ciseco

Senior Member
The dev board? £26, but you can make your own for nothing - it's just customers don't tend to want to get the soldering iron out :) They are made here so cost a bomb in labour; something further afield could be less than half that price, but we aren't into enough numbers yet.

Most don't want to even program, so I'm compiling everything into one huge library. Might even make a PIC version of some of the simple stuff like counter, temp, light, relay on etc - more work for Dippy :)

All you need is a 28X1/PC or 2x28X1 and a couple of XBees (my code relies on clean transmissions) (think you already have some?). You can then modify it for 433 use with some kind of error checking (would be better to do it "off" chip though). No reason why both can't exist on the same network; you just need to repeat all commands over both RF systems. In fact you could even test using a crossover cable to start with, on OUTc6 and IN7.

eagle layout is here http://ciseco.co.uk/forum/viewtopic.php?f=34&t=41

I've mailed you some code that I'm just changing from serin style to hsersetup background receive. Bit messy but it works.

Oh dear, my attempt at the website not good then :(

Any suggestions?
 
Last edited:

ciseco

Senior Member
Hippy, try changing the packetization timeout (RO). Looks like it waits 3 byte-lengths before committing to a packet; your loop-back might have been so fast it was overflowing the available packet length (256 bytes) and forcing a transmission (times 4) - see below.

Saw this in the manual, which is fine for low packet size, infrequent transmissions (we do 10 per sec max in software, 12 bytes). At 9600 this gives you an effective 2400 due to the increase in traffic, so upping to 115K should allow for 30+kbps, if I read it right.

Broadcast transmissions will be received and repeated by all nodes in the network. Because ACKs are not used the originating node will send the broadcast four times. Essentially the extra transmissions become automatic retries. This will result in all nodes repeating the transmission four times as well. A delay is inserted between each retransmission. Sending broadcast transmissions often can quickly reduce the available network bandwidth and as such should be used sparingly.
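The bandwidth cost in that manual excerpt is just division: every broadcast occupies the air four times over, so the usable rate divides by four. Miles's 9600-to-effective-2400 reading and the 115K-to-30+kbps estimate both drop out of this rough sketch:

```python
def effective_throughput(baud, retransmissions=4):
    """Crude upper bound: each broadcast is sent 'retransmissions'
    times (no ACKs), so usable throughput divides by that factor."""
    return baud / retransmissions

print(effective_throughput(9600))    # 2400.0
print(effective_throughput(115200))  # 28800.0 - i.e. the '30+kbps' ballpark
```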
 
Last edited:

hippy

Ex-Staff (retired)
@ Miles : Thanks again. RO shouldn't be a problem in my case. I build a one byte packet, pass it on, then wait for an echo before doing the next. At some point after sending my byte the XBee will transmit it regardless of RO, and a reply should come back well before my timeout occurs. That multiple broadcasting could be an issue as it could take some time.

@ Dr_Acula & Miles : Sleep seems to be somewhat different on DigiMesh, especially designed for nodes which are battery powered, but I haven't studied it in detail. I get the impression that all nodes will synchronise themselves up so they all wake at the same time, stay awake for a period, then shut down. All comms is done in that period which, from a forum post, is a fixed period, not extended if there's a lot of data to be sent. That's a good idea in that it means battery drain can be minimised - no one wakes up when nothing could be sent.

But it would seem to preclude a remote sensor having an urgent message it needs to push through the system forcing node wake-up as it goes. Not much use as a burglar alarm when the break-in message is held up for an hour until all nodes wake up and comms is re-established. As I say I haven't really studied DigiMesh in any detail so I may have misunderstood and there may be other modes.
 

ciseco

Senior Member
Cool, didn't know what you were testing with, thought it might have been continuous bytes out, bytes in and count.

Yes, I read it that way too. That's one good thing (it's done for you), yet one very bad thing about DigiMesh: they ALL are down or ALL up. I want to be able to control what is down and what is up; this to me is very limiting and means all devices will consume similar power even if they are doing nothing. Mmmmmm - small scale will be fine, but with multi-use devices, some sleeping, some chatty, it can't account for both. Looking more like it offers me nothing except maybe a small-scale end network that could be routed back onto the wider and more capable one.

As you say, depending on the sleep cycle it would be unresponsive across the whole network. 1 second wakes could be acceptable for an alarm; as you say, an hour won't be.

Trying to do proper meshing without any side effects looks to be virtually impossible; there's always a trade off. Repeaters/routers either duty cycle or stay on. Can't see how to "intelligently" wake something that's in a low sleep without actually receiving, and that's where the current drain is - conundrum.

All I can think of, and I wouldn't know how to do it, is to have very low powered bandpass-filtered receivers that wake on a particular frequency. That said, a spotty youth on a moped would probably wake everything up :)

Co-ordination from one central place is the next best fit, which ZigBee will do, but you have to get into plenty of nitty gritty to do it and then you are stuck with that platform. Seems I stumbled into the least unfriendly solution with what I'm doing; can't say I planned it that way, it just evolved :)

Miles
 
Last edited:

hippy

Ex-Staff (retired)
The fundamental problem is that mesh networks on battery power cannot be easily implemented without cost and considerable complexity and any cheap off-the-shelf solution will not necessarily be ideal for all situations. There's also the issue of where the complexity should be, in the modules themselves or in the microcontroller, and what microcontroller can handle the necessary complexity.

At the moment I don't think XBee and other solutions have evolved enough to do everything which may be desirable in a battery powered mesh network. There's still a divide between what's wanted ideally and what's achievable. Current solutions are all a trade-off of one thing for another and I suspect that it never will be possible to overcome all those trade-offs without radical technology shifts out of our control, the best we can do now is minimise them and design best solutions for specific cases.

The biggest obstacle for the amateur is limited funds and limited technology to use; we forever end up flip-flopping between 433MHz and XBee, each having its advantages and disadvantages, and trying to overcome the disadvantages in some other way, which introduces its own problems. It seems like an "NP-complete" problem, which is where most people's brains simply explode :)
 

moxhamj

New Member
Good point hippy - not much point in the burglar alarm going off after the burglars have left. Nor a robot telling its friends to come and bask in the warm sun and the message arriving after sundown.

Instant on means always-on radio. No way to get around that. And that means modules with microamps of current draw. They do exist and I have a few, but they are not as sensitive - probably a function of superhet vs super-regenerative.

Anyway, always listening I think precludes the PICAXE as well, as they draw 3mA. I'm playing around with using tones (generated with the PICAXE using pulsout so they are of a fixed pulse width). Decode using a one-shot tuned to the same pulse width. Should be able to tune out random noise by changing the charge/discharge ratio on the capacitor. HCT logic uses microamps. This turns on a PICAXE which in turn might turn on other circuits.
 

Attachments

Last edited:

hippy

Ex-Staff (retired)
@ Dr_Acula : I think you have the right approach; the key is in having 'always ready' receivers drawing uA and those being able to wake-up the PICAXE / micro from zero current sleep when genuine transmission occurs. That's firmly in the hands of wireless engineers, outside of my experience.
 

ciseco

Senior Member
Hippy, I found out by accident, when I changed one module to 15.4 mode without max header, why in conventional mode they work more reliably. A broadcast sends 3 identical packets prefixed by a single byte. Every successive broadcast then increments this counter, so the other end knows, if it receives 1, 2 or 3 copies, that they were the same original packet, and discards the rest.
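That duplicate-suppression scheme is easy to mimic: the copies of one broadcast share a leading sequence byte, each new broadcast bumps it, and the receiver keeps the first copy of each sequence and drops the rest. A sketch assuming frames arrive as (sequence, payload) pairs:

```python
def deduplicate(frames):
    """Keep the first copy of each run of identically-numbered frames;
    the repeats are the automatic broadcast retries."""
    last_seq = None
    kept = []
    for seq, payload in frames:
        if seq != last_seq:
            kept.append(payload)
            last_seq = seq
    return kept

# Two broadcasts, each heard three times:
frames = [(7, b"temp=16.6")] * 3 + [(8, b"temp=16.7")] * 3
print(deduplicate(frames))  # [b'temp=16.6', b'temp=16.7']
```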

The more I think about things, going for really low power can only be done by network co-ordination (requires always-on or cyclic routers), end-node on-demand sleeping, and allowing a node to break its own silence if needed. Cyclic sleeping forces a trade-off of response against reduced duty: decreased current consumption means less response. As Drac says, there's no point in a burglar alarm that has to wait to transmit. The same with non-time-based cyclic routing, where it periodically says "I'm ready, give me what you have". Using low-sleep receivers in a mesh, they'll keep waking each other up (much better for narrow-beam point to point unless Drac can figure out some logic for it). If you allow for retries (let's say 3) you effectively reduce overall mesh bandwidth by at least a factor of 3, so having as high a baud rate as possible becomes important. Whoever said this was the holy grail wasn't far from the mark; I'm starting to think it's a problem without a reasonable answer. Even if the tuning succeeds, I wonder if the data rate will be sufficient to actually support a significant number of sensors; then filtering out tones from data will be the next issue, then building a routing table for tones to sensors - the complexity rises with every workaround.

Real downer it's not easily possible :(

Miles
 
Last edited:

manuka

Senior Member
I've been following this self-contained thread with much interest, & ponder a few points -

* Lower freqs. will always have more "punch" thru' vegetation etc. No matter how you handle 2.4GHz data comms, almost ANYTHING in the way will block weak signals, as is of course now endlessly verified with 2.4GHz WiFi APs, Bluetooth & ZigBee.

* The only real data benefit of 2.4GHz relates to improved B/W. Since very low data rates (2400-9600bps - or even less) are being pondered, there's hence further incentive to stay lower freq. Even VHF at ~150MHz will do better than 433MHz.

* DTMF (which handles a max of ~10 ch.p.s) has extraordinary robustness - even a sniff of it on mediocre links will often be decoded. Although Basic Stamps have DTMF-sniff, it's not native to PICAXEs (& I've tried all manner of PWM-style coding), but it can be handled by classic DTMF chips. These 1980s-90s darlings are increasingly elusive/costly, although PIC-based approaches have evolved.

All up it may well be time to reconsider DTMF for simple mesh needs. Dr_A - DTMF follow up?

EXTRA: Aside from powerful UHF "PRS" CB (~470MHz in Aus/NZ), I've just pondered sending DTMF over 433MHz data units, as the likes of the Jaycar units also handle tones nicely! Must try!
 

Attachments
