Ok, so say we have iBOMs; this in itself could be extended further to make it even more powerful. Imagine that all of the items in an iBOM could be iBOMs themselves, from individual components through to component modules and entire projects. Then we would be able to assemble modular constructions of arbitrarily complex parts; we would even be able to create meta projects with meta-iBOMs. So how do we get there, how do we get components and modules as iBOMs? Well, the modules could be fairly simple as they are in the control of those creating the OpenSource Hardware projects (and opensource software iBOMs); the tricky piece is the individual components. I have several ideas here: if Octopart were a third party iBOM operator they could start creating iBOMs for all available components. Sure, this sounds like a lot of work and they probably couldn't do it all themselves; they would need help from the community. Here is one way they could incentivise the community. They could create a standard component description file (a component iBOM) that also includes additional useful information. This could be used not only to identify the component but also to produce/derive its various footprints using a standard opensource tool/script. By doing this the iBOM file for the component could be used by various CAD packages to bring in the valuable footprint databases much needed in the community. Octopart could create a database of popular component iBOMs for the open source community (commonly used ones like ATmega328s) to kick start the open iBOM DB process. That would encourage their use and also encourage others in the community to start adding their own iBOMs back to the open iBOM database (basically component iBOM files on, say, github). If the iBOMs were done correctly to also capture contextual information like component usage (used in production) and quality (peer review/rating), a contextual metabase over the top could add qualitative selection criteria and magnify the iBOMs' worth; this would give designers confidence in using the iBOMs and proliferate their use.
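To make that a little more concrete, here is a rough sketch of what a component iBOM file might contain. The field names and values are purely illustrative (my own invention, not a proposed standard), but the idea is that the package geometry is rich enough for a simple opensource script to derive footprints for different CAD packages, while the context block is the sort of thing a contextual metabase could aggregate:

{
  "part": "ATMEGA328P-PU",
  "manufacturer": "Atmel",
  "description": "8-bit AVR microcontroller, 32KB flash",
  "package": {"type": "DIP", "pins": 28, "pitch_mm": 2.54, "row_spacing_mm": 7.62},
  "footprint_hints": {"hole_drill_mm": 0.8, "pad_diameter_mm": 1.6},
  "datasheet": "http://example.org/datasheets/atmega328p.pdf",
  "context": {"used_in_projects": 12, "community_rating": 4.5}
}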
So in this new world of iBOMs and meta-iBOMs, what changes? Well, pretty much everything: electronics construction gets dragged kicking and screaming into the 21st century, and Open Hardware production becomes much more simplified and distributed. But why should Octopart do such a thing? Well, they could pull a Google on the electronics supply chain for a start; that's got to be worth something? We get great footprints and metadata, and our opensource tools (KiCad/gEDA) would improve significantly; I would expect it to be in the interests of, say, an Octopart to invest in those tools and help develop them. I could even imagine Octopart buying Github and integrating project management into the whole iBOM. Pretty soon the buyers would start steering the component manufacturers/vendors, indicating what needs to be made to make our lives easier, rather than the dictatorial arrangement we currently have with them. It will also change how things are manufactured in general: with the combination of this and 3D printing technologies (RepRap et al) things could be produced nearer their destination rather than being shipped halfway around the world as they currently are (highly inefficient). That sort of change could make huge differences to the future economies of not just electronics but manufacturing itself.
Part 3 coming shortly..
iBOM Part 1
I found myself in conversations recently around the progression of OpenSource Hardware development and moving its production forward. I have long been interested in distributed open production and have talked about this for some time. In a recent conversation with one of my colleagues (Ken) we discussed how his project Nanode could be moved forward and made more accessible to a larger audience. We were wondering in particular about solving the production problem; in Nanode's case it was kit based. We came to the conclusion that it would be cool if we could produce a BOM as part of the project that would be more enabling to potential participants in the project. I suggested to Ken the idea of a hyperlinked BOM, in its simplest form a link to a preloaded basket at a different distributor suitable for each geographic location. After thinking about this some more I came up with the concept of the Active or even the Intelligent BOM (aBOM/iBOM). With an iBOM you basically have a hyperlink (hBOM) to a web based service which takes a standard BOM, creates multiple baskets and chooses components from a range of suppliers, picking the best choices. It could also offer basket optimisation like cheapest, or fewest suppliers etc., dependent on the BOM contents and/or the quantities/preferences you might provide. For it to work you would provide a standard file format BOM text file in your project repository (on say github); you could then create a REST based link to the third party iBOM provider, e.g.
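Something along these lines would do it (the file format, service name and URL parameters below are all hypothetical, just to illustrate the shape of the idea). A minimal BOM text file in the repository:

# bom.txt - quantity, part, reference designators
1, ATMEGA328P-PU, IC1
2, 22pF ceramic 0805, C1 C2
1, 16MHz crystal HC-49, X1

and the iBOM hyperlink is then just a REST style URL that points the third party service at that file along with any preferences:

http://ibom.example.org/baskets?bom=http://github.com/someuser/nanode/raw/master/bom.txt&region=UK&optimise=cheapest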
When the user clicks on this link they are taken to the third party and presented with various competitive baskets with which to buy the project's components (or vitamins as RepRappers call them!). You could even produce PCBs with the URL (or even a QR code/barcode) on them so folks can populate them more easily, or produce the PCBs themselves and then populate them using the iBOM.
So who could provide such a third party?
I will expand on this further in Part 2.
Would love your thoughts as usual..
So in part 2 of 'OpenSource Hardware (OSH) a way forward', 'OpenSoftChip', I provided some details on XCore from XMOS and expanded on why the approach beats Microcontroller (MCU) + FPGA for DSP-like applications. In part 3 here I would like to cover XCore as a primary candidate for the Amino project, so let's remind ourselves of that project's initial aims:
Modularisation – A modular topology enables common components to be snapped together using composition, allowing focus on just the custom features of a given project or task. It also reduces complexity and leads to faster project turnaround.
Standardisation – In order to have modularisation and composition as well as reuse, standardisation is required via opensource implementations made available for testing, production, modification and experimentation.
Digitisation – Opensource software is perfectly digital: its reproduction is as simple as copying bits. Hardware isn't so simple, but the more of it that can be digitally expressed and rendered, the easier its reproduction and the more accessible it becomes to a larger audience.
Reuse – Being able to reuse as much hardware and software as possible reduces consumption and is more environmentally friendly. Common modules or components can be assembled at reduced cost, minimising overlap, and they can be reused time and time again for experimentation and prototyping. Hacking culture often seeks to reuse, mash up and redefine items for use elsewhere; design should embrace this modern form of cultural reuse.
It may also be useful to refer to the historical framing of this from the Open Hardware Production post.
Let's tackle these with XCore at the center of Amino, and perhaps make some comparisons versus MCU/FPGA along the way.
Ports & Expansion
Originally, to provide modularisation, we extended the shield concept into a bus with ports. The MCU + logic would provide simple 8-bit buses for expansion. Although this was an improvement on the shield concept, by enabling multiple hardware modules to be used together, limited MCU pin numbers combined with hardwired functions made this an exercise in juggling and resulted in significant compromises. Using XMOS XS1 series soft chips provides significantly more I/O pins and ports which are dynamically reconfigurable; these are closer to an FPGA than an MCU. This flexibility enables an Amino development board to have many I/O ports which can take on multiple personalities/functions as dictated by the add-on module hardware and a corresponding software driver*. It's worth noting that we are only using and defining digital I/O with the new designs; analog is added via modules to provide maximum flexibility.
Ports – Response & Control
In addition to the I/O modularity, we also get solutions to some of the more complex issues. Dealing with I/O in a timely manner using MCUs requires the use of interrupts, which tend to be limited on MCUs; often extra logic is added to provide flexible port I/O response and management of interrupts. With XCore's event driven I/O architecture these problems are eliminated completely and become part of the software and the module itself.
Ports – Special functions
Unlike MCUs, which use dedicated pins for special functions such as SPI, I2C etc., XCore defines the pin function in software itself, as a driver. Such functionality can thus be loaded at runtime according to the module requirements. This may seem counter intuitive at first compared to dedicated hardware blocks within an MCU, but it means you can have any number of pins dedicated to any number of functions rather than being limited to what's hardwired on the MCU. For instance, my application and its modules could require lots of SPI channels or UARTs; XCore can handle these requirements by dynamically loading the functions as required.
Ports – Dynamic Adaption
Even though I have not worked out where such an idea could be used in practice, an XCore could effectively change its pin functions during runtime to adapt to hot swappable modules, or even complex modules that change their own functionality at runtime. This is an area that would be worth examining in the near future to take Amino to a completely new paradigm.
The physical pins will likely be grouped into 4-, 8- or 16-bit segments to provide standard pinouts and polarisation; logic will likely be at 3.3/3.6 volts to accommodate a wider range of modern peripherals not capable of 5V operation. The rest of the standardisation is defined by the software drivers and a simple XML configuration file. In addition to the regular port expansions there will also be Link expansions to enable units to be interconnected, delivering arrays for more complex computing requirements. Debugging would be achieved using JTAG-like schemes which may also be available on board via a USB interface. The programming environment will use a standardised toolchain and all hardware and software will be opensourced (there may initially be some limitations until all of the new toolchain pieces are completed).

Because of the software nature of the standardisation an Amino board can be updated in the field to support any changes and/or patches to the standards. Standards will be governed by full opensource implementations; these will act as de facto standards and anyone can reproduce them in an opensource manner. This is key to any potential opensource distributed production tenets and as such will be encouraged to enable development and innovation across the Amino project. I am also looking at what can be included in the standard for testing using loopback and maybe even virtual instrumentation. Here standardisation enables replication to a given operational specification, important for opensource distributed production.
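To give a flavour of how the driver plus XML configuration side of that standardisation might hang together, here is a small sketch. The element names, port group labels and driver names are all my own invention for illustration, not a defined Amino format: a module ships with a configuration file declaring which port groups it needs and which software driver each should be loaded with, and a simple script checks that against the drivers available.

import xml.etree.ElementTree as ET

# Hypothetical module configuration; element and attribute names are illustrative only.
MODULE_XML = """
<module name="audio-codec" version="0.1">
  <port group="8A" driver="i2s_master" />
  <port group="4B" driver="i2c_control" />
</module>
"""

AVAILABLE_DRIVERS = {"i2s_master", "i2c_control", "spi_master", "uart"}

def resolve(xml_text):
    """Return the (port group, driver) pairs a board needs to satisfy this module."""
    root = ET.fromstring(xml_text)
    assignments = []
    for port in root.findall("port"):
        driver = port.get("driver")
        if driver not in AVAILABLE_DRIVERS:
            raise ValueError("no driver available for %s" % driver)
        assignments.append((port.get("group"), driver))
    return assignments

print(resolve(MODULE_XML))   # [('8A', 'i2s_master'), ('4B', 'i2c_control')]

On the board itself the equivalent logic would of course live in the XCore software rather than a host script, but the point is the same: the module's requirements are plain data, and satisfying them is just loading the right software.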
Here we are again playing to OSH's strong points as illustrated by Arduino. In this case however we take it even further, with the digital side controlling much more at a much lower level and with a much greater capacity for programming. Because such a large part of Amino using XCore is digital it is easily replicated and shared. In fact building a large opensource code base is the primary task for Amino's success and is the number one focus right now. It will also allow new participants to build on the 'shoulders of giants' and lean on the wealth of the commons.
A great deal of the XCore/Amino fusion is really just software, and reuse of bits is trivial; so much so that it could actually be reusable at runtime, which is an interesting concept. Also, because of its greater modularity and more flexible core, its uses multiply. Thus it can be plucked out of one project or design and placed straight into another. I am also tempted by a Lego-like composition that enables projects to be completed with multiple units, increasing reuse even further.
So XCore and the Amino aims fuse together nicely; what's more, the XCore blows the other candidates away when it comes to the key tenets of the project, and that's why we don't have the MCU + FPGA deliverables, there really isn't any point. Instead I am focusing my time on developing the software architecture and standards alongside several XCore based boards in Fast, Faster and 'To infinity and beyond..' flavours. At this point I am developing using existing hardware development kits from XMOS themselves, and will have the software operating on several of these first. I will also be making sure it runs on a stamp-like board, possibly Omer's Stamp board, in the very near future.
As usual, if you are interested in Amino or XCore let me know; if you want to help there is plenty to tuck into. Obviously we will be chewing the fat around the board and software designs as we post here, so please contribute and let us know your thoughts.
At some point I would like to add a part 4 to OSH a way forward, concentrating on 'infinity and beyond..' and where that could take us, but right now I am still gathering thoughts around how that will function, how it can be managed, and how we could use composition to solve more complex projects, so watch this space.
*Note: I use the term 'driver' here but really it is a simple software module that can be dynamically loaded; 'driver' is a little overkill but conjures up the basic idea.
So this second post would be a good place to explain where we are coming from with regard to moving OpenSource Hardware (OSH) forward; also see Open Hardware Production for historical background/context. Obviously many folks working on many projects are all contributing to moving OSH forward, but my primary concern here is what started out as the Amino project and has expanded beyond it into more complex open hardware projects with significant performance requirements. The Arduino is a great example of successful OSH and it has opened up 8-bit embedded development to the opensource world. Arduino represents an approachable platform for anyone with only modest programming knowledge to get a taste of what OSH can achieve. Amino began as an attempt to see whether that approach could take things to a much higher level, bringing much more within OSH's reach, including challenges not practical on 8-bit microcontrollers.
Initially I and others envisaged 32-bit microcontrollers being combined with FPGAs to provide a more powerful OSH development platform that could tackle Digital Signal Processing (DSP) applications like audio, video, voice, gesture and AI, as well as opensource robotics and vision projects. In this model the stuff that has to be processed quickly is handled outside of the micro by the FPGA, with opensource modules effectively loaded in as suitable for the task: MACs, FFTs, whatever. The trouble is FPGAs tend to be built around proprietary IP and tools, and even though there are Open Cores and GCC based tools, getting this stuff to play nicely together is very difficult. Add to this the complexities of HDLs, VHDL/Verilog, C/C++ models and simulators, and pretty soon you need to be an expert to do anything useful. Even if one could modularise the FPGA parts, integrating them back into the development stream with the microcontroller code adds yet another moving target. I am not saying it is impossible to achieve the FPGA/microcontroller marriage, but doing so in an opensource manner is virtually impossible. One could also integrate the controller into the FPGA itself, but this modus operandi is again often limited by IP rights and licensing issues.
That is when I figured that rather than using traditional logic blocks within an FPGA, one could instead use super fast multithreaded simple 32-bit cores (or multicores) able to achieve results beyond microcontrollers and well into the FPGA applications bandwidth ranges. The theory goes like this: rather than building VHDL, rearrange the problem into a programming problem using more familiar programming languages like C/C++ with added concurrency constructs. In fact there are examples of this already within the FPGA world, which are then translated back into logic blocks or HDLs. In this case however no translation is required; we assume that the 'software chip' has enough threads/cores to execute the concurrent code within the bandwidth envelope. It also makes simulation easier as one doesn't need to convert into exotic HDLs and deal with widely varying latency/timing issues. One of the other advantages of this route would be the learning curve and the ease of use and entry for programmers, as opposed to traditional electronic engineers. Another major advantage is that it could use a proven opensource toolchain like GDB/GCC/Eclipse to provide a good overall OSH environment. What is more, this model builds on the success of the Arduino, where the 'software is the hardware' approach has proven so popular.
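To make the 'rearrange it as concurrent code' idea a little more concrete, here is a deliberately trivial sketch. It is written in Python purely for brevity; on a real soft chip this would be something like XC or C running on genuine hardware threads with deterministic timing, but the shape is the same: one concurrent worker per input stream doing a multiply-accumulate, the sort of job that would otherwise be pushed into FPGA logic.

import threading
import queue

def mac_worker(name, samples, coeff, results):
    # One worker per input stream: multiply-accumulate over incoming samples,
    # conceptually 'one thread per port' rather than an ISR or an HDL block.
    acc = 0
    while True:
        x = samples.get()
        if x is None:              # sentinel: stream finished
            results.put((name, acc))
            return
        acc += x * coeff           # the MAC step

stream_a, stream_b, results = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=mac_worker, args=("port_a", stream_a, 3, results)).start()
threading.Thread(target=mac_worker, args=("port_b", stream_b, 5, results)).start()

for i in range(10):                # feed both streams concurrently
    stream_a.put(i)
    stream_b.put(i * 2)
stream_a.put(None)
stream_b.put(None)

print(results.get(), results.get())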
So is it possible to achieve this feat? Is there such a thing as a 'soft chip' that is capable of tackling at least modest DSP/FPGA applications, and can one build an opensource toolchain around it? And what about the power requirements: if it's tens of watts then it's not practical. Well, the Parallax Propeller is an interesting candidate but fails on delivering an opensource toolchain. Its programming leaves a little to be desired, along with its performance due to going the interpreted route, although incredible results are possible using its assembly language; it's a good start but we can do better. We are in luck however, as the XMOS XCore range of processors and development kits offer what I have enumerated and much more. It may also be useful to check out the heritage of XMOS and its founders; you will discover things like the Transputer and Occam, ideas before their time but perfect for our emergent issues and challenges. So this is no fly by night idea; many of the XMOS technologies and ideas have a solid research and experimental application footing spanning a couple of decades.
More importantly, XMOS have taken their lessons from INMOS and its Transputer/Occam and added much more to focus the technology for the current era. In case you haven't heard, concurrency is the next programming language killer app; I've personally spent the last couple of years coming to terms with it using languages like Erlang. It is amazing how much easier concurrency is if a language has concurrency at its foundations and within its primitives. Languages like C/C++ make the transition exceptionally difficult and fraught with hidden dangers, liable to trip up even the most experienced developers and engineers. XMOS have tackled this by creating a language called XC (PDF), which looks a lot like C and will help with the initial learning curve for existing microcontroller programmers (XCore also supports C/C++). XC however also has roots in Communicating Sequential Processes (CSP) and includes primitives that not only provide concurrency but also simplify event driven programming for hardware. Rather than having to come to terms with the complexities of Interrupt Service Routines (ISRs) and the multiplexing of them using a Realtime Operating System (RTOS), one can use an XC based event driven approach to solve concurrent hardware processing simply and logically. I would say it is almost natural in the way one uses events instead of the engineering abstraction of interrupts found on traditional microcontrollers. As for the performance of the XCores themselves, even the basic single core L1 chip (<$5 in quantities) zips along delivering 400 MIPS across up to 8 concurrent threads, with further models such as the G4 (<$14 in quantities) containing 4 cores each with 8 threads per core, or 1600 MIPS in an 11×11 mm package that doesn't need a heat sink!! And if that's not powerful enough for you, you can build hypercubes of G4s to rack up the processing power you require, e.g. the 25 GIPS XMP-64!
As a bonus for hardware hackers, the XS1 chips come with lots (I mean lashings) of I/O ports, just like an FPGA only better!
But one of the key features of XCore (and XMOS) is their opensource friendly approach. I don't mean just paying lip service: they actually developed XC around the opensource LLVM project, which I have talked about before and where incidentally I first found out about XC. Not only that, but XMOS are committed to a complete opensource toolchain including Eclipse, which makes it fit into OSH snugly. Furthermore, initial conversations with folks at XMOS and within their communities have shown that they are excited by the possibilities that OSH and XCore together can achieve in 2010 and beyond. I will have more on specifics later in separate posts.
A company with this pedigree, and such timely technology including good opensource tools and a community approach, is just too good a chance to miss IMHO; we should take advantage of it and take the technology places others have only dreamed of until now.
In the next post (part 3) I will touch on Amino and how XCore can help deliver some of its goals, and perhaps paint a picture of how far we can take the idea..
So 2009 passes and 2010 begins; it's goodbye to the 'noughties' and hello to the 'teenies'? For Folknology however the start of 2010 is an important milestone and marks the transition away from software-only projects: everything from here on out will be a combination of software and hardware, and we are not taking on any new software-only gigs. It is also the start of a potentially fascinating decade of innovation dictated not by the Microsoft/Intel unnovation, but rather by emergent opensource hardware and software combinations that will bring into existence completely new ways of building information technology.
As part of my research over the last two years at Folknology I finally have some of the pieces of the OpenSource Hardware (OSH) and OpenSource Software (OSS) jigsaw in place, enough to begin building a much bigger picture. This also means I can focus on some of the critical tasks that are required to move Folknology's OSH plans forward. It is always good to sharpen one's focus so that building can begin on a solid foundation. But before I enumerate where the labs are heading I would like to give you an update on the last 3-4 months, where it has all come together.
Whilst working on Amino at the end of summer 2009 I met with and researched microcontroller vendors like TI, NXP and STM; those conversations reshaped the initial plan for the project. I ended up dropping STM and looked at porting to NXP's ARM Cortex M3, with a view to later settling on NXP's emerging Cortex M0 platform. The reason for the change was the cost/performance/value proposition that the M3/M0 combination offered. NXP was the only vendor that could provide pin compatibility all the way through, along with tempting pricing. More importantly, NXP had a promising story around opensource toolchain support (based around OCD/GCC/Eclipse). So why am I not announcing the proverbial 'Amino delivered using M0/M3 NXP parts' win post? Well, the tools/parts I needed to follow through didn't show up within the expected timeframe. This wasn't any major slippage, rather just a short delay and poor follow-up by the vendor, but the delay was just long enough to give me thinking room to explore the upper bounds of Amino as projected into the future. The breather enabled me to explore some 'what ifs', particularly with respect to the higher end applications around audio/video/music and other general DSP-like requirements.

I had always planned on adding FPGA modules to help handle these more bandwidth intensive applications of Amino. It was in this period I started hitting road blocks: the FPGA marketplace is a mix of competitive silicon production of increasingly high densities, interlaced with numerous business models based upon intellectual property (IP). It is not straightforward and there appears to be little appetite in the industry for an opensource approach to innovation. In fact the more I spoke with industry proponents the more alien the concept of opensource hardware/chips became. I was just getting to the point where I was ready to kick down a few doors in anger when I had an epiphany; not a sudden one, but rather something that had been bubbling up for months unconsciously, triggered by a conversation with a like-minded individual. I was at the Open Hardware Camp at Nesta, having a few drinks after the event and chatting to Omer about the things we were both working on. Coincidentally he was about to embark on a fiendishly complicated PhD project around reconfigurable vision systems and had been doing a lot of FPGA research before implementation. As I explained the problems I had been having moving the OSH format forward around Amino, I managed to coin what I thought represented an ideal vendor offering, the core technology that could enable the transformation we required.
My thinking was based on conversations with many folk over the months, but boiled down to this: what if the basic logical building block was a super fast multithreaded core that could be assembled in multiples to match the application, rather than the tricky FPGA programmable logic blocks which vary considerably between vendors? This would allow the complex FPGA proprietary toolchain issues to be bypassed and a simple code based model (C/C++ or even a new language) to be developed around existing opensource tools and libraries. I even mentioned Parallax's Propeller as an interesting, if less than perfect, example of a way forward.
With Omer's interest in Arduino, as well as his knowledge and experience of the FPGA issues, he quickly got where I was coming from and asked me if I was familiar with XMOS; he had also been working with the XCore silicon and was even considering arrays of G4 multicore versions en masse, like XMOS's XMP-64 platform, as a possible hardware candidate for his PhD thesis. It was at this point I had my epiphany: maybe there was already a way forward that I had forgotten about months ago. I hadn't looked at the XMOS technology since early 2009, when I was still thinking Arduino compatibility, and it had completely escaped me as a candidate to solve the current issues.
The next few days were spent catching up with what XMOS had planned and what their game plan was moving forward. It became clear to me that they could well be a candidate not just for Amino but for a range of higher end ideas I had been playing with for some time. Over the next few weeks and months it became clear that XMOS was not just a candidate but quite possibly the answer to more than one of Folknology's visions, so I decided to integrate those concepts into a more coherent strategy, covering not just the Amino projects but a number of potential community initiatives that could help push the OSH envelope much further forward, perhaps in conjunction with XMOS themselves. In the next few posts I will expand on Folknology's OSH vision for 2010 and beyond, as well as the opportunities offered by technologies like XMOS's XCore and their opensource toolchain, and I look forward to your contributions and feedback..