Going to try the grass on the other side!

BeanieBots

Moderator
If anything it's the opposite, Jez. The tokenized code is very clever and very space efficient, but it is also very general. When you write a program in assembler, you only include code that is actually required, rather than using part of a general routine. If, for example, you wanted to serout something, it is unlikely that your routine would need to cater for inversion or different baud rates, but a general purpose routine would be almost useless without such features.
When it comes to higher level compilers, a lot depends on how clever that compiler is. Many years ago I had a C compiler for the Z80. Whenever I used the printf statement, a massive amount of code was generated just in case I needed all of its features.
If you just wanted to set a single output high, you'd do just that. You wouldn't write a subroutine which could be passed a pin number and a state value etc, etc.
Also, there is the bootloader which is not required and many other parts of the firmware that need to be there JUST IN CASE that particular feature is needed.
Bottom line is that YOU are in control.
 

Jeremy Leach

Senior Member
I see your point, BB. I programmed a lot in Z80 assembler years ago, so I know what you're saying.

It's a complex topic really, though - for instance, if you used Serout a lot and in different ways throughout a big program, you'd get to the point where the tokenised method was more memory efficient, in the same way that in general programming efficiencies can sometimes be made by creating general purpose subroutines.

'Code density' was the phrase I was looking for, according to Wiki (and apparently 'Software Bloat' is what bad compilers generate).
 

BeanieBots

Moderator
I think we are all familiar with bloatware:mad:
I miss the Z80 with its 64K of interrupt-driven I/O space. Sadly, many of the very useful peripheral chips are now obsolete. They would have made great interfaces for PICAXE - for example, the dual channel serial Tx/Rx with onboard FIFO: just fire the data in and read it at your leisure with either interrupts or polling. Also the keyboard/display driver with similar features.
One of the features I liked in VB was the option to compile for speed or for code size. I have very little experience of PIC compilers, but I would assume a similar feature would be present in any half-decent one.
 

Dippy

Moderator
A common feature on (some/all?) compilers is the optimisation level.
Some compilers describe in their 'Help' or manuals what they are doing during optimization (or optimisation).
But because you have a lot more control (including writing your own functions for your code), a lot of anti-bloatisation is in your own hands.
As one compiler's Help says: "For speed and size optimisation, there is no shortcut to experience."

Yes, you can use the built-in library function for (e.g.) READADC(x), but if you write it manually (i.e. setting registers yourself) you will reduce code space and increase speed.
One compiler I know of (not MikroElektronika, which I don't really get excited about) gives access to the library routine, so you can see how it has been written in a generalised way to make life easier for the coder. You can also see the 'extra' code generated. When writing your own you can then see the reduction, and your own experience can be used to optimise.

Some time ago I wrote an FFT for a dsPIC using the ME mikroPascal and mikroBasic compilers. Writing my own routine versus the library ReadADC() improved speed by a factor of 3 and cut a few bytes off the code (not that space was an issue).
I have lost my original code (I blame it on the odd IDE), but this is how you would set up and read the ADC 'manually', from the samples file:
Code:
  ADPCFG = 0     ' PORTB is analogue input
  ADCHS  = 8     ' Connect RBxx/ANxx as CH8 input. RB8 is input pin
  ADCSSL = 0     '
  ADCON3 = $1F3F ' sample time = 31 Tad.
  ADCON2 = 0
  ADCON1 = $83E0 ' turn ADC ON, fractional result, internal counter ends conversion

  TRISB.8 = 1    ' RB8 as input pin
and then read the ADC

Code:
  ADCON1.1 = 1                  ' Start AD conversion
  while ADCON1.0 = 0            ' Wait for ADC to finish
    nop
  wend
  result = ADCBUF0              ' Get ADC value
So, it shows that (unless you pinch all the code you need) you have to read the Data Sheet - and vaguely understand it too :)
Still fancy it? A nice 700-page Data Sheet.
 

stocky

Senior Member
"Mainly wanted to play to gain a pin back on the 8 pin PIC for a design - didnt want to move to the 14 pin ver just for 1 pin!"

So you bought a compiler and programmer to save buying a 14M?
Mmmm....??? I must borrow your wallet :)

If it was a board space thing fair enough.
Yes, it was a (chip) size thing...
The compiler is free up to 2K.
The programmer/dev board does 6-pin to 40-pin with support for everything in between,
full of I/O, LEDs, switches, LCD, analogue inputs etc etc etc - you can do full designs for many things without a SINGLE external component!
:)

My wallet ain't that big - i looked for a while before deciding...
 

moxhamj

New Member
BB: "Sadly, many of the very useful [Z80] peripheral chips are now obsolete. For example the dual channel serial Tx/Rx with onboard FIFO."

What a strange coincidence. I just bought 4 last week from Futurlec. http://www.futurlec.com/ICZ80Series.shtml

Using some very cunning code optimisations I've just shoehorned some radio router code into an 18X. It is fun, but a bit frustrating to have to code while always thinking of code space. I am looking forward to my Z80 boards arriving - a hybrid Z80 and PICAXE system which (I think) will give the best of both worlds. Never run out of code space - even if it goes over 64K, just chain another program in CP/M.
 

BeanieBots

Moderator
Well that's good to know Doc. I see they do the PIO as well.
I was sure both of those had gone. None of my usual suppliers in the UK do them anymore. A PICAXE/Z80 hybrid sounds very powerful. Might have a go myself.
 

Dippy

Moderator
Jeremy: "for instance if you used Serout a lot and in different ways throughout a big program you'd get to the point where the tokenised method was more memory efficient."

- can't see it really, Jezzer. You can create subroutines just as easily in compiled code, and each is assembled as a callable routine. On one compiler I know of you can optionally create 'inline' subroutines or procedures, where the subroutine's assembler is inserted at every calling position - this would produce the effect you describe. Downside: space; upside: speed - and it still maintains code readability. Obviously you use it with care and where appropriate!! Most code bloat is down to your own skill, care and competence.
But it requires a lot more input from the coder to write stuff. It is deffo a big step and needs knowledge of PIC workings. PICAXE BASIC doesn't need PIC knowledge for most people most of the time - that's its BIG advantage for many people.

I'm not saying Don't do it. I'm saying learn PICAXE thoroughly, learn how to read a Microchip Data Sheet then try some out before parting with the cash. Look before you leap. And I'm still learning.

I've just been revisiting some MikroElektronika dsBASIC. Bit rusty so had a look through the samples. Jeez, some of the samples don't even work with last year's Dev boards. It really isn't good.
 

Jeremy Leach

Senior Member
Yes, after reading your (and BB's) comments I can see my thoughts weren't right there. But still, there are lots of advantages with PICAXEs, especially shorter development times - which appeals to me.
 

hippy

Technical Support
Staff member
"One question I'd like an opinion on: My understanding is that picaxes use very efficient tokenisation in the interpreted code. So I would think that it is unlikely that the average person could get a particular chip to 'do more things' by using assembler etc. Is this correct?"
Generally I'd say yes - which is why I chose a PICAXE to drive a hard disk when I was playing with that idea rather than a PICmicro.

What the PICAXE lacks is speed, and its program size is limited. You cannot take out something you don't need to make room for something else, and you cannot tailor a particular function to be just what you need.

What the PICAXE gains is that all the fundamental facilities are already there and don't have to be coded-up and tested.

You can do more things and better with assembler but that comes at the cost of effort and increased development time so it's a trade-off.

As to the actual efficiency of the tokenised code: it's superb for code density ( squeezing a lot of program into a tiny space ) but not so good for speed of execution. With more modern PICmicros there's scope for improvement and speed gains, but that would be a major design change for Rev-Ed.

The tokenised code is also rather inefficient in that it limits programs to very simple commands. To set a bit variable true or false to indicate whether a variable is above a certain value means having to use IF-THEN-ELSE or jiggering about with MIN. Efficient would be being able to say "bit15 = b0 > 5".

That said, PICAXE Basic can be thought of as a Macro Assembler for the PICmicro, and in fact that's the way some PICmicro Basic compilers do it. They get the gain of not having to extract the tokens and execute them at run-time, but otherwise they frequently still call the same routines the PICAXE would after extraction. Compiled Basic isn't necessarily blindingly fast when compared to PICAXE Basic - only when the program is suited to compilation. HIGH, LOW and even simple assignments might be, but multiply and divide aren't. IF-THEN-ELSE can often be compiled more efficiently, but not without code bloat, so it's all swings and roundabouts.

The only time I'd really recommend someone use a PICmicro over a PICAXE is if the PICAXE isn't suited to the task or is too limited; not enough code space, too few variables, doesn't have the speed, unsuitable I/O configurations.

This is of course for, as you said, "the average person". Those who have more advanced skills won't find the same difficulties in using something other than a PICAXE, but it's interesting that those advanced users are here and using PICAXEs, because ease of use and speed of development is a major benefit of the PICAXE.
 

hippy

Technical Support
Staff member
Jeremy: "for instance if you used Serout a lot and in different ways throughout a big program you'd get to the point where the tokenised method was more memory efficient."

Dippy: "- can't see it really Jezzer. You can create subroutines just as easily in compiled code and is assembled as a callable routine."
A subject close to my heart at present - it's where I've been for the last three weeks and why you haven't seen much of me of late: writing the back-end for a C compiler and a Virtual Machine to run on another processor. The key to success there has been adopting a bytecode / tokenised approach.

The issue comes down to how bytecode / token density compares with native instruction set density. If the native instruction set takes three bytes to call a specific routine and an optimised bytecode can call the same in just one, then programs consisting mainly of such calls could be three times larger in the same space, albeit slower.

Most PICAXEs use just 5 bits for their command token, so that's amazing code density in theoretical terms. There are around 40 PICAXE variables in total (bits, bytes, words), and a "HIGH b0" comes in at just 12 bits - smaller than, or the same size as, a single PICmicro instruction, and you'd need at least two of those to load from a variable and call a routine.

So I'd say Jeremy was right, tokenised code is generally more memory efficient.

A halfway house is 'threaded code', where the program is no longer arbitrary tokens but pointers to 'what to do' held as a list in memory. It's like having a "CALL xxxx" but not needing the "CALL", so on a 16-bit machine that's around 30% smaller, and 50% smaller on a 32-bit one. Decoding is faster than bytecode, so what one loses in density one gains in speed.

So in summary -

* Below byte-sized tokens - Highest density, smallest code space, slowest to execute
* Bytecode - High density, small code space, slow to execute
* Threaded Code - Reasonable density, medium code space, fairly fast
* Compiled code - Lowest density, larger code space, fastest

PICAXE went for the first because of a historical lack of code space ( just 256 bytes of Eeprom ), so it was a sensible decision. It is, though, a source of inefficiency on more modern PICmicros with larger memory.

Most compilers should achieve the last wherever possible, but real-world programs often end up as little more than threaded code with compiled-code overhead. Not that that's such a bad thing: MS Visual Basic p-code was heavily criticised for being in that camp, but the truth is that if you spend most of your time in library / firmware code running at high speed, the slightly slower 'glue' doesn't really slow anything down.
 

Dippy

Moderator
Some of your comparisons depend on the size of your code, don't they?

Compiled code - no bootloader or similar firmware space overhead, and no 'extra' time required for overhead processing. The speed difference for a simple high/low is huge.

What's the equivalent of HIGH b0 in assembler?
Code:
  BCF TRISB,0,0   ' Set I/O direction to output
  BSF PORTB,0,0   ' Set the port bit, i.e. the actual High
--- something like that, I can't remember exactly. OK, there are overall settings too.

And the TRIS has been set by firmware in some cases; in others you set the I/O state yourself.

Speed comparisons are very tricky.
The difference for a given comparable command between PICAXE BASIC and, say, M.E. BASIC can range from huge to not-so-huge. A lot of this is down to the use of generalised command libraries and the compiler's inability to pluck out only the wanted bits - e.g. the command dragged in from the library may include code that is not needed. If you start writing things 'manually' (impossible in PICAXE) then space/speed can improve greatly. I'm not a fan of M.E., btw.

Anyway, this is going on a bit now, so I'll leave you to it.
Horses for Courses for people as well as PICs.
 

hippy

Technical Support
Staff member
I agree a bootloader does eat up space, but it also depends on whether one is comparing PICAXE with compiled code, or a generic but optimised bytecode solution with compiled code.

The "HIGH b0" in PICmicro assembler, ignoring TRISB, would probably be something like -
Code:
MOVLW   1
RLF     var,W
IORWF   PORTB,F
So that's 42 bits versus the PICAXE's 12, a more general-purpose bytecode's 16, or threaded code's 28. For the processor I'm dealing with it's still three instructions, but they're 32-bit: 96 bits versus 12, 16 or 32 - code densities of 8:1, 6:1, 3:1.

Notionally that processor will be able to support programs up to 8 times larger than were the program purely compiled. Reality isn't that generous.

There is also the overhead of the interpreter itself and the library code, but as the program size grows this proportionally becomes smaller. If we've expanded code capacity eight-fold anyway, it has much less of an impact to start with. The interpreter and libraries can also be optimised to include only what's needed. As you say, optimisation is an important part of it all.
 

Dippy

Moderator
My brain must have gone - I thought you just used Bit Set File (BSF) to set a Port<pin> high
(and Bit Clear File (BCF) to clear it low).
Oh well, it works for me. I'm a bit rusty.
Never mind, I'm sure you're right. I haven't checked efficiency.

Anyway, pubs open, so my priorities have changed ;)
Seven Stars near Wareham used to sell good beer, haven't been for years, so we'll pay a visit tonight.
 