Category Archives: Tech

ia64: Plan9, Compilers and ABIs

So, I have my second-hand HP vx2000 (single-CPU Itanium2 workstation) running in my room.  (OK, this itself is a mistake – it’ll be moved into the home office once I get sick of the added heat in my room.)

For some bizarre reason, I seem to have come up with the idea that trying to port Plan9 to it would be a good idea.

I’ve started studying the architecture and standard ABI documentation and I’m still trying to get my head around little details, but the whole thing seems pretty doable if I beat kencc into shape first.

The standard ABI register usage suggests a mixture of caller-save and callee-save conventions (some of the global registers are available as caller-save scratch).  This should only require minimal changes to kencc: teach it to work out how many extra registers it needs for any given proc, allocate them dynamically via the appropriate mechanism, and then skip their save/restore on call/return.  That itself shouldn’t hurt kencc much (unlike on sparc32, etc., where you need to work almost exclusively in the callee-save model to get the best results if you want to use register windows, and that’s fairly contrary to how kencc thinks and allocates registers), but it will make context switching and debugging a bit more complicated.

Alternatively, we could just ignore register spill/fill and try to cram ourselves into the scratch registers only.  This would probably sit well with most plan9 developers.

The last (and equally insane) option is to meet the minimum requirements for spill/fill (so EFI calls that allocate registers won’t kill us), but allocate all the registers and treat them as caller-save globals.

This will make context saves even more expensive (saving 128 64-bit registers WILL suck), but it’s simple.

Anyway, this isn’t the really hard bit – as far as I can tell, the hard bit is fixing the Plan 9 assembler/loader to produce good ia64 machine code and pick sensible optimisations.

At least I know I’m not imagining it…

I’ve recently had the displeasure of having to update the copy of WANPIPE that we ship with our product at work from the old stable 2.2 family to the beta 3.3 family, in order to support their new Synchronous Serial adapter (the A14x family, which is replacing the old S514x family).  We use these cards to support Frame Relay communications.  Frame Relay is still reasonably popular in Australia for private point-to-point communications, and, to be frank, our product with a supported Sangoma card and annual support probably still costs substantially less than the cheapest Cisco with 100mbit ethernet + sync serial for Frame Relay support and equivalent support.

So far I’m not impressed with the A142 kit or drivers.

The kit itself is pretty shoddy.  Whilst the card is a nice small dual-layer card which will fit into a low-profile PCI slot (and even comes with the half-height edge bracket), the cabling that comes with it is atrocious.  The card has a mini-centronics connector with screw terminals which the Y cable attaches to (the A142 is a two-port card, and is the smallest they offer now), and that itself is OK.  The dodginess comes in with the V.35 cable kit, which attaches a V.35-to-DB25 cable to the DB25 Y cable that you screw into the card.  The main problem is that both the DB25M on the V.35 cable and the DB25F on the Y cable have screw terminals – so you can’t secure the two to each other, making it the weakest (and usually highest-tension, since the short Y cable usually dangles off the back of the system) join in your V.35 cable run.

And then there were the drivers…  After spending a while chasing my own tail because my old 2.4.30 build tree had been damaged (but was still churning out valid modules, just without valid ksyms), I was finally able to get a build of the 3.3 modules that worked.

Then I had to slave away accommodating the new user-space tools and the changed configuration file layout for wanconfig (the WANPIPE stack configuration tool), only to discover that they’ve managed to break DLCI state indication in two places.  First, you used to be able to check IFF_RUNNING to find out whether the DLCI was active or not; not any more.  Secondly, the LIP layer (Sangoma’s own WAN stack) reports card and DLCI state transitions via dmesg.  I did my usual FRAD power-down test to make sure it was even tracing DLCI failure correctly, and LIP didn’t even notice the DLCI had gone silent/failed until I turned the FRAD back on and it was in its resynchronising state.
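
For reference, the old-style check was simply reading the interface flags and testing IFF_RUNNING – roughly something like the sketch below (the helper name and the interface name “wan0” are mine for illustration, not Sangoma’s):

#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

/* Returns 1 if the interface reports IFF_RUNNING (DLCI up, under the old
 * behaviour), 0 if not, -1 on error.  Sketch only. */
static int dlci_is_up(const char *ifname)
{
	struct ifreq ifr;
	int fd;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return -1;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

	if (ioctl(fd, SIOCGIFFLAGS, &ifr) < 0) {
		close(fd);
		return -1;
	}
	close(fd);

	return (ifr.ifr_flags & IFF_RUNNING) ? 1 : 0;	/* e.g. dlci_is_up("wan0") */
}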

At least I got an email back from Sangoma technical support confirming that they had indeed broken the support unintentionally, and that it’ll be fixed in the next release.  Shame they didn’t mention when that would be.

C: The Preprocessor

Sometimes you just have to go that extra mile and do something completely dodgy.

The termios way of setting baudrates uses macros to represent the bit rate.  When you’re accepting a baudrate from the user, what’s the most sensible way to handle it in C?

  1. Get the macro value directly from the user.
  2. Get the baudrate as a string, and then string compare to determine which baudrate you want, and set the flags appropriately.
  3. Get the baudrate as an integer, and then integer compare to determine which baudrate you want, and set the flags appropriately.

People answering 1 or 2 can hand in their C licenses.

So you’re doing integer comparisons, right?  Hey, we can use a switch/case statement for that, right?

So before long we get:

switch (baudrate) {
case 50:
    baudflag = B50;
    break;
case 75:
    baudflag = B75;
    break;

… etc …

Surely there must be a better way?

Thanks to the C preprocessor, we have a slightly better option…

#define SPEEDMAP(x) case x: baudflag = B ## x; break;
switch (baudrate) {
SPEEDMAP(50);
SPEEDMAP(75);
SPEEDMAP(110);

…etc…
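
For completeness, here’s a sketch of how the whole thing might hang together – the wrapper function, the fallback value and the particular speeds listed are my own illustration rather than the original code:

#include <termios.h>

/* Map a plain integer baud rate onto the matching Bxx termios constant.
 * Sketch only – falls back to B9600 for rates it doesn’t know about. */
speed_t baudflag_for(int baudrate)
{
	speed_t baudflag = B9600;	/* arbitrary fallback for this sketch */

#define SPEEDMAP(x) case x: baudflag = B ## x; break;
	switch (baudrate) {
	SPEEDMAP(50);
	SPEEDMAP(75);
	SPEEDMAP(110);
	SPEEDMAP(9600);
	SPEEDMAP(38400);
	/* ... and so on for the remaining Bxx macros ... */
	}
#undef SPEEDMAP

	return baudflag;	/* then hand it to cfsetispeed()/cfsetospeed() */
}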

I wouldn’t get too carried away with solving stuff this way though…