i8

Development plan

Do you want to learn how the ISHIZENO i8 is supposed to work? If so, please read on, as we go through the steps we will follow to put together our crazy submodular synthesizer. All this is pure vaporware at this stage, but I’m starting to get a picture of how it could actually be done, should we have an infinite amount of time and a handful of helpful advisors at our disposal.

Step #1: Faceplate and Microcontroller

Diagram 0.0.1.0

First, we will put together a basic faceplate and connect 8 of its inputs to an XMOS XS1-A16A-128 multi-core microcontroller. The faceplate won’t look as pretty as this mockup, but it should have the right dimensions, and it should be populated with as many rotary encoders and jack chassis sockets as possible, in order to get a feel for its overall geometry. From there, we will try to acquire some analog signals by plugging the 4 CV outputs of my DSI Pro 2 into 4 of our CV inputs. The analog-to-digital conversion will be handled by the ADCs built into the microcontroller, which means that we will only have 8 CV inputs, with a 12-bit resolution. But this should be plenty to learn how to program what will be the most complex component of our architecture.
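
To give an idea of what this first milestone amounts to in software terms, here is a minimal sketch of turning a raw 12-bit sample into a control voltage. The 0–5 V range is an assumption about the input conditioning, which has not been decided yet.

```python
# A minimal sketch: scale a raw 12-bit ADC code to a control voltage,
# assuming the input conditioning maps CVs onto a 0-5 V full scale
# (that range is an assumption, not a decision).
ADC_BITS = 12
V_RANGE = 5.0  # assumed full-scale voltage after conditioning

def code_to_volts(code):
    """Scale a 0..4095 ADC code to volts."""
    return code * V_RANGE / ((1 << ADC_BITS) - 1)

print(code_to_volts(0), code_to_volts(2048), code_to_volts(4095))
# -> 0.0, ~2.5, 5.0
```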

Step #2: Raspberry Pi 2

Diagram 0.0.1.1

Second, we will connect our Raspberry Pi 2 to the XMOS microcontroller through the USB 2.0 bus. Through this connection, we will try to read our digital CV inputs from the Pi. When that’s working, we will try to modify these signals by using the 64-bit DSP function of our 32-bit microcontroller. This will demonstrate that we can use the XMOS for real-time signal processing, across multiple channels, and that we can get signals sent to the Pi for future visualization on a tablet.
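
As a rough illustration of what reading those samples from the Pi could look like, here is a sketch using pyusb. The vendor/product identifiers, the bulk endpoint, and the 16-bit little-endian framing are all placeholders, since the actual USB protocol between the XMOS and the Pi has yet to be defined.

```python
# A minimal sketch of pulling one frame of CV samples from the XMOS over
# USB, using pyusb. VID/PID, endpoint address, and frame layout are all
# hypothetical placeholders for a protocol that does not exist yet.
import struct
import usb.core  # pyusb

XMOS_VID, XMOS_PID = 0x20B1, 0x0001   # placeholder identifiers
EP_IN = 0x81                          # placeholder bulk IN endpoint

dev = usb.core.find(idVendor=XMOS_VID, idProduct=XMOS_PID)
if dev is None:
    raise RuntimeError("XMOS board not found on the USB bus")
dev.set_configuration()

# Read one frame of 8 CV samples, assumed to be 16-bit little-endian words.
raw = dev.read(EP_IN, 16, timeout=1000)
cv = struct.unpack("<8H", bytes(raw))
print(cv)
```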

Step #3: Rotary Encoders

Diagram 0.0.1.2

Once our Raspberry Pi 2 is working, we will connect our 16 rotary encoders to it, using some of the GPIOs exposed on its 40-pin header. From there, we will send digital knob settings from the Pi to the XMOS, and we will use some of these settings to modify our digital CV inputs. We will then try to visualize the impact of such settings on the digital CV inputs sent by the XMOS to the Pi.
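
For reference, decoding a single encoder on the Pi could look something like the sketch below; the BCM pin numbers are placeholders, and a real implementation would add debouncing and handle all 16 encoders.

```python
# A minimal quadrature-decoding sketch for one rotary encoder, assuming
# its A/B pins are wired to BCM GPIO 17 and 27 (placeholder pins).
import RPi.GPIO as GPIO

PIN_A, PIN_B = 17, 27
position = 0

GPIO.setmode(GPIO.BCM)
GPIO.setup([PIN_A, PIN_B], GPIO.IN, pull_up_down=GPIO.PUD_UP)

def on_edge(channel):
    """On each edge of A, step up or down depending on B's level."""
    global position
    a, b = GPIO.input(PIN_A), GPIO.input(PIN_B)
    position += 1 if a != b else -1

GPIO.add_event_detect(PIN_A, GPIO.BOTH, callback=on_edge)
```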

Step #4: OLED Displays

Diagram 0.0.1.3

Once our rotary encoders are under control, we will try to connect a couple of OLED displays to the Pi, using its SPI bus. This will allow us to validate the selection of our display, and to get a feel for its actual refresh rate. We will then try to drive 16 displays through the SPI bus, and validate that we can still use them for real-time feedback. If time permits, we will try to implement some basic signal visualization on them. From that point on, we will do most of our tests by using the rotary encoders and the OLED displays, in order to develop a feel for the overall user experience, and validate that our component selection is suited to the task at hand.
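
Before committing to 16 displays, a quick way to sanity-check the refresh rate is to time how fast a full framebuffer can be pushed over the SPI bus. The sketch below assumes a 128×64 monochrome panel on bus 0, chip-select 0, which is only a guess at this point.

```python
# A throughput check: time how fast a full framebuffer can be clocked out
# over SPI, which puts an upper bound on the refresh rate. Panel size,
# bus, chip-select, and clock speed are all assumptions.
import time
import spidev

WIDTH, HEIGHT = 128, 64            # assumed panel resolution
FRAME_BYTES = WIDTH * HEIGHT // 8  # 1 bit per pixel

spi = spidev.SpiDev()
spi.open(0, 0)                     # assumed bus 0, chip-select 0
spi.max_speed_hz = 8000000

frame = [0x00] * FRAME_BYTES
start = time.time()
for _ in range(100):
    # Real displays need a command/data preamble; this only measures the bus.
    spi.xfer2(frame)
elapsed = time.time() - start
print("max ~%.0f frames/s on the bus" % (100 / elapsed))
```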

Step #5: Digital Submodule

Diagram 0.0.1.4

At this stage, we should have become fairly comfortable with our development and testing environment, and it will be time to develop our first submodule. We will build something really simple around the STM32F405RG DSP, while keeping in mind that the interconnect between the backplane and the submodule should support both digital and analog modules. In order to test our submodule, we will also integrate the CS42428 audio codec, which means that we will finally be able to produce some sound with our apparatus, by plugging some ports of our DB25 connectors into the 4 inputs of my Avid Pro Tools | S3. We will also spend some time working on our mixing section in order to drive our stereo headphone output. This will be done using 9 LM833-N operational amplifiers and a set of digital potentiometers.
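
As a back-of-the-envelope illustration of the volume control, here is how a target attenuation could be mapped to a wiper code, assuming a 256-step digital potentiometer wired as a plain voltage divider; the actual part and topology are still open.

```python
# A back-of-the-envelope sketch, assuming a 256-step digital potentiometer
# used as a simple voltage divider in front of the headphone buffer
# (part number and wiring are not fixed yet).
STEPS = 256  # assumed resolution of the digital pot

def wiper_code(attenuation_db):
    """Return the wiper code giving roughly the requested attenuation."""
    ratio = 10 ** (-attenuation_db / 20.0)  # linear gain, 0..1
    return max(0, min(STEPS - 1, round(ratio * (STEPS - 1))))

print(wiper_code(0), wiper_code(6), wiper_code(20))  # -> 255, 128, 26 roughly
```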

Step #6: Analog Submodule

Diagram 0.0.1.5

Once we have a working digital submodule, we will develop an analog one, after having added our 8 AD5360 digital-to-analog converters. This will allow us to ensure that our submodule interconnect is generic enough, and that we have proper isolation between analog and digital signals. We will use this opportunity to start learning about analog sound synthesis, by putting some oscillators, low-pass filters, envelope filters, and delays on our submodule. All this work will be inspired by existing analog synthesizers that are available under a Creative Commons license. And if things work out, we will ship some prototypes to a few early adopters who might want to develop their own submodules.
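
Since much of this step will be spent picking component values, a tiny helper like the one below comes in handy, assuming simple first-order RC stages; the actual filter topologies on the submodule are still open.

```python
# A small helper for the analog low-pass work, assuming a plain first-order
# RC stage (actual filter topologies are still to be chosen).
import math

def cutoff_hz(r_ohms, c_farads):
    """First-order RC low-pass cutoff: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# e.g. 10 kOhm with 10 nF sits around 1.6 kHz
print(round(cutoff_hz(10e3, 10e-9)))
```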

Step #7: 16-bit ADC Converters

Diagram 0.0.1.6

At this point, we should be getting much more comfortable with mixed-signal design, and it will be time to add our two AD7606 analog-to-digital converters. These will give us 16 CV input ports (instead of just 8), with a nice 16-bit resolution (vs. 12-bit).
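
For reference, turning the AD7606’s raw samples into volts is straightforward, assuming the converter is strapped for its ±10 V input range; it outputs 16-bit two’s-complement codes.

```python
# A minimal conversion sketch, assuming the AD7606 is configured for its
# +/-10 V input range; it delivers 16-bit two's-complement codes.
def ad7606_to_volts(code):
    """Interpret a raw 16-bit sample as a signed value and scale to volts."""
    if code >= 0x8000:          # sign-extend the two's-complement word
        code -= 0x10000
    return code * 10.0 / 32768  # 32768 counts span the 10 V half-range

print(ad7606_to_volts(0x0000), ad7606_to_volts(0x7FFF), ad7606_to_volts(0x8000))
# -> 0.0, ~+10.0, -10.0
```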

Step #8: DAC Converter

Diagram 0.0.1.7

The last major piece of silicon to be added will be another AD5360 digital-to-analog converter, which will be used to generate 8 CV outputs. This converter will be directly driven by the XMOS. At this point, we will have a working design, and we will be in a position to turn our schematics into an actual PCB design for which we will order a handful of prototypes. These will be tested extensively, then shipped to our early adopters for further validation.
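
The reverse mapping for the CV outputs is just as simple, assuming the AD5360 is configured for a ±10 V span with offset-binary coding; the exact reference and offset settings are still to be chosen.

```python
# A minimal sketch of the CV-output mapping, assuming the AD5360 is set up
# for a +/-10 V span with offset-binary coding (reference and offset
# settings are assumptions at this stage).
def volts_to_ad5360_code(volts, span=20.0):
    """Map a voltage in [-span/2, +span/2] to a 16-bit offset-binary code."""
    volts = max(-span / 2, min(span / 2, volts))
    return int(round((volts + span / 2) / span * 65535))

print(volts_to_ad5360_code(-10.0), volts_to_ad5360_code(0.0), volts_to_ad5360_code(10.0))
# -> 0, 32768, 65535
```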

Step #9: WiFi Hotspot

Diagram 0.0.1.8

While we are waiting for our backplanes, we will use our prototype to integrate our WiFi hotspot and start the software development of our web-based user interface. It will be tested using an iPhone 6, an iPad Mini, and an iPad Air, in order to ensure that we have a fully responsive design. The back-end will be written using io.js, while the front-end will be developed with Angular 2.0. Our first screen will be a simple oscilloscope.
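
The back-end will be io.js as stated above, but the framing problem is language-agnostic: the oscilloscope screen only needs one (min, max) pair per pixel column. Here is a sketch of that decimation step, in Python for brevity.

```python
# A sketch of the decimation the oscilloscope screen will need: reduce a
# block of samples to one (min, max) pair per pixel column. This is only
# an illustration of the framing; the real back-end will be io.js.
def decimate(samples, columns):
    """Split samples into `columns` buckets and keep each bucket's extremes."""
    n = max(1, len(samples) // columns)
    return [(min(samples[i:i + n]), max(samples[i:i + n]))
            for i in range(0, n * columns, n)]

# 9600 samples squeezed into 320 pixel columns
print(len(decimate(list(range(9600)), 320)))  # -> 320
```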

Step #10: Grid

Diagram 0.0.1.9

The next step will be to connect the ISHIZENO g3 grid, which will be developed in parallel. We will test two modes of connection, WiFi and USB, with a clear preference for the former; the latter will serve as a fallback if a web-based connection over WiFi introduces too much latency, something that should be evaluated by actual musicians.
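
To put numbers on that latency question, a crude round-trip probe like the one below should be enough for a first comparison; it assumes the grid (or a stand-in) echoes UDP packets back on a hypothetical port 9000.

```python
# A rough latency probe for comparing the WiFi and USB-networked links,
# assuming the grid (or a stand-in) echoes UDP packets on port 9000
# (hypothetical port and echo behaviour).
import socket
import time

def round_trip_ms(host, port=9000, count=50):
    """Average UDP echo round-trip time, in milliseconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    total = 0.0
    for _ in range(count):
        start = time.time()
        sock.sendto(b"ping", (host, port))
        sock.recvfrom(64)
        total += time.time() - start
    return total / count * 1000.0

# print(round_trip_ms("192.168.1.42"))  # hypothetical grid address
```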

Step #11: Faceplate & Module

At this point, we should have a complete system, and we will be in a position to complete the development of our backplane, then work on our faceplate and its PCB. From there, we will finalize the interconnect between the faceplate PCB and the backplane PCB, then wrap up the mechanical design of our module. Final faceplates will be ordered and populated with all their controls, displays, and ports. A thorough testing procedure will then be developed and applied to another set of prototypes that will be shipped to our early adopters. An additional round of feedback will be gathered from them, which will lead to final hardware modifications. This will give us a final design that will be offered for sale through channels that have yet to be defined.

So, how long will that take? One year? Two years? Three? Forever?

Place your bets!
