Dignity of synthetic instrument creations

When synthesizers were still relatively new and, above all, had no sound memory, every keyboard player created their own sounds: mainly synthetic instruments that took acoustic instruments as a role model, and some that could be described as genuinely new. This resulted in so-called lead sounds, synth basses and percussives. It was no coincidence that terms such as synthbrass and syncussion were used. Their characteristics were also similar to those of acoustic instruments, at least in principle: they consist of an attack, a sustain or decay phase, and whatever happens after a key is released. Initially, only a few options were provided for dynamic control, in particular the pitch bend wheel for bending notes and the modulation wheel for vibrato and tremolo. In the 80s, a really serious dynamics tool was added in the form of keyboard velocity, which was particularly welcomed by keyboard players with piano training. Special sound effects and noises have also been created since the early days, but this article is about synthetic instruments.

You first had to learn how the instrument worked and what analog sound synthesis was. It didn’t hurt to have some basic knowledge of existing instruments and how their tones are created. That was the case in the 70s and the norm, so to speak. Some manufacturers included so-called patch charts with the instruction manual so that everyone could immediately set up a few sounds. That was a great help when learning to create synthetic instruments. If you wanted to create your own sounds with the parameters, for example for certain songs, you used a blank patch chart and marked the settings of the parameter knobs and switches with a ballpoint pen or felt-tip pen so that you could recreate the sound exactly later. This chart was the template: the keyboard player set all the knobs, sliders and switches back to the noted positions and the desired sound was playable again. Some keyboard players were particularly talented in this discipline and created unique, sometimes very well constructed original results. Some had a distinct character of their own or, and this was almost the norm, were so personal, so distinctive and unmistakable that a listener would associate them with a specific keyboard player.

That was the beginning of the so-called signature sounds. Sometimes a song from this time with such a signature sound became famous, and sometimes a synthesizer solo became a classic. Back then, musicians also exchanged such patch charts with each other, but not all of them did: some regarded these self-created sounds as their trademark and did not pass them on. This changed when preset ROM/RAM memory arrived and the manufacturers themselves brought people on board to equip a new synthesizer with as many sounds as possible. It was the beginning of sound design, first as a side job, then as a full-time profession. Over the years, a lot has happened in the keyboard market, especially with more and more instruments from more and more manufacturers. Other syntheses were also added, such as FM (frequency modulation), additive synthesis, phase distortion and, above all, samplers, which were often a combination of synthesizer and pure sample player.

So far, so good. But today there’s a catch. And that is not only the unmanageable amount of synthetic sounds, but also the many completely arbitrary, meaningless or outright useless tones. Tones that lack everything we know from acoustic instruments: an unmistakable character and that certain something. Even a simple recorder delivers character that is instantly recognizable after you have listened to it for a few seconds. In other words, much of today’s tone material in synthesizers lacks the dignity that even the recorder has. How could this happen?

On a whim, I recently watched a few YouTube videos in the evening after work and suddenly had the impulse to listen to a Tangerine Dream album again. It was Rubycon, which is my favorite album of theirs. And while I was listening to it, I also remembered the two live concerts of theirs I saw: one around the time of their album Phaedra, the second around Tangram. Both gigs were sensational. The three Moog Modular cabinets let the sounds sweep through the concert hall with breathtaking force, just like the Mellotron, the Solina String Ensemble, the Korg PE-2000 and the EMS VCS 3. The PA had power and transmitted the sounds perfectly. A real treat. And the sounds were partners to the music and offered exactly the aesthetics that the musical ambition wanted to portray. In other words, three elements: character, presence and a song-oriented purpose. Is that no longer the case today? It still is, but far too rarely and sometimes not at all.

YouTube: Tangerine Dream: Rubycon, full album

Do we want to leave it like this, this facelessness, this irrelevant paleness, this boredom? That doesn’t mean it necessarily has to come across as massive, as with Tangerine Dream. No, the finely drawn, delicate sound is also needed in a musical context if a certain fragile expression is to be reproduced. Or it has to sound funny, like the Casiotone frog preset, which plays an almost striking role in Michael Jackson’s Thriller. Or the famous Preset 11 electric piano of the DX7, which has been an integral part of songwriting for 40 years now with its slenderness and great dynamic possibilities. In other words, everything that is associated with the term famous sound: TR-808 drums, the Prophet-5 sync sound, the widest brushstroke of all called Jump from the Oberheim OB-Xa. But none of this has to come across as swaggering; it can be done differently. The glassy pad of a PPG Wave 2.2, or the soft and warm fuzz of the Oberheim Matrix 12 preset Horn Ensemble, when the singer calls for a pad sound that lays the sonic foundation for his performance. The bizarre, beastly lead sound in which you can hear the punk it is supposed to represent. The dignity that a synthetic sound can have can be found in all these sounds. After all, such an instrument should be on a par with acoustic instruments. So it may be time to return to that, as described above: to dignify the synthetic instrument creations and to proceed according to the principle that less is more.

.

Copyright notice:

Sharing/reblogging is expressly desired. Reprinting, even in part, as well as any editing and commercial reuse are not permitted or require written permission from me.

FM Synthesis: Same but different

Although the Yamaha DX7 is often equated with FM synthesis, this is not entirely accurate. And also unfair, but more on that later. While the basic principle is the same for every FM engine, the implementations are not identical. This has become apparent over the years. Today, FM synthesis is as ubiquitous in the world of electronic sound generators as analog synthesis. So if you look at different FM synthesizers, you can attribute a different basic character to each one. Sometimes this is not particularly striking, but it is at least subtly audible.


I noticed this particularly clearly a few years ago with the Alesis Fusion. Since then, I’ve been referring to its FM engine as “hot”. What does that mean? Depending on the operator modulation, the sound becomes quite biting, almost coarse, at higher modulator levels. We already know this from the forefather Yamaha DX7, which can show this sonic face quite well with basses, for example. Other FM engines, on the other hand, are at the other end of this spectrum and seem downright tame in comparison. And there are some whose character sits somewhere between these two extremes. To illustrate this, I have put together some videos for this blog post. You’ll hear sounds from the Yamaha DX7, Alesis Fusion, Korg Opsix, and the two software FM synthesizers Tracktion f’em and Sugar Bytes Aparillo.
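To make the “hot” versus “tame” idea a little more concrete: in two-operator FM, raising the modulator’s output level (the modulation index) adds more and more sidebands, which is what makes a sound biting at high settings. Here is a minimal sketch in Python with numpy, using made-up frequencies and index values rather than any particular engine’s internals, that renders a tone at a low and a high index and counts the strong partials in each:

import numpy as np

SR = 48000
t = np.arange(SR) / SR  # one second

def fm_tone(carrier_hz, ratio, index):
    # Two-operator FM: the modulator (scaled by the index) bends the carrier's phase.
    modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
    return np.sin(2 * np.pi * carrier_hz * t + index * modulator)

def strong_partials(signal, threshold_db=-40.0):
    # Count spectrum bins within threshold_db of the loudest one (a rough brightness measure).
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    db = 20 * np.log10(spectrum / spectrum.max() + 1e-12)
    return int(np.sum(db > threshold_db))

for index in (1.0, 8.0):  # "tame" versus "hot" modulation depth
    tone = fm_tone(110.0, ratio=1.0, index=index)
    print(f"index {index}: {strong_partials(tone)} strong partials")

The higher index produces a much denser spectrum, which is roughly what I mean when I call an engine hot.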


To round it off, there are two more videos, each of which compares another instrument to the DX7: the Korg Kronos with its MOD7 FM section, and the Yamaha MODX. We already know such comparisons quite well from the Minimoog and its emulations and clones. Now that the text is done, let’s move on to the sounds and the videos. Have fun!


DX7

Fusion

Opsix


f’em

Aparillo

Kronos 2 MOD7 vs DX7II

MODX vs Yamaha DX7

The trivialization of euphony

We are currently living in the seventh decade of electronic musical instruments. Initially it was the synthesizer that surprised us with its completely new sounds, followed later by samplers and physical modeling. Synthesizers became increasingly diverse as the journey went from analog to FM synthesis, phase distortion, wavetables, additive synthesis and other subgroups. While musicians initially had to cope without touch dynamics, knobs and buttons were used instead as a means of adding dynamics. The manufacturers had actually intended these for basic sound creation and provided them on the control panels. But musicians being musicians, they want to express the music: emotion, drama, fun, aesthetics, atmospheres.

All in the service of the songs, the corresponding lyrics, and in film as a suitable background for the scenes. The filter cutoff quickly developed into the central color control, because the effect of changing the sound from brilliant to dull is striking and thus serves as the broad brushstroke of musical expression. The so-called controllers such as pitch bend and modulation wheel were also available right from the beginning, so that you could bend notes like a guitarist and add vibrato like a violinist. When velocity sensitivity was added in the 80s, it was a milestone in the construction of electronic keyboard instruments. Pianists were immediately attracted to it, also thanks to the reasonably comfortable polyphony. And when the first samplers came onto the market not much later, pianos were soon digitized. At first they were still very flat, with only one dynamic level, usually forte.

But when the velocity switch was invented, it was suddenly possible to play several dynamic levels from pp to ff, even on keyboards with weighted keys. The path to the digital piano was now paved. This digitalization had consequences. Firstly, a lot of musicians turned away from analog synthesizers; their rediscovery would take a good decade. Samples were the order of the day, across the entire range of acoustic and electric instruments. You could always tell from the 8- and 12-bit sample players that you were dealing with samples. The distance to the sampled originals was immediately audible. This was also due to the fact that they were often recorded using very simple means: a dynamic microphone held close to the violin, a few bow strokes later and the thing was in the can. However, some players made a special effort to perform with these actually rather primitive sounds in such a way that the musical result was quite appealing. With volume pedals, wheels and skillful playing techniques typical of the imitated instruments, it was possible to get quite close.
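Technically, a velocity switch is nothing more than a lookup from the incoming MIDI velocity (1 to 127) to one of several recorded dynamic layers. A minimal sketch in Python, with invented layer names and split points purely for illustration:

# Hypothetical velocity zones: upper velocity bound -> sample layer.
VELOCITY_ZONES = [
    (31, "cello_pp.wav"),
    (63, "cello_mp.wav"),
    (95, "cello_mf.wav"),
    (127, "cello_ff.wav"),
]

def layer_for_velocity(velocity: int) -> str:
    # Return the sample layer whose zone contains the MIDI velocity.
    for upper_bound, sample in VELOCITY_ZONES:
        if velocity <= upper_bound:
            return sample
    return VELOCITY_ZONES[-1][1]

for v in (20, 64, 110):
    print(v, "->", layer_for_velocity(v))

Crossfading between adjacent layers instead of hard switching is the obvious refinement, but even this simple mapping is what made pp to ff playable on a keyboard.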

However, you could still hear the difference in sound between the samples and an actual orchestra. This was to change as the sample quality improved, as did the recording techniques used to create them. Expensive recording studios with high-tech microphones and the finest peripheral equipment for refinement made the recordings of orchestral instruments in particular better and better. The know-how of the sound designers who transformed these recordings into playable presets also made enormous progress in some cases. But there was a catch: now the musicians had to keep up. And that failed, at least for many of them. Operating systems that were anything but inviting to master certainly played a role in this. Not always: there are virtuosos on the keys who manage to breathe life into these high-quality sounds, using controllers and the appropriate playing skills, and mastering the matter with musical ideas and inspiration.

But they are the exception, at least in industrially produced music. Now that the Golden Age, namely the 80s with its endless number of great songs and bands, is over, and the industry dictates assembly-line music with exploitation chains reaching all the way to supermarkets and petrol stations and their intrusive background sound, things have gradually become lousy for musicians. Why? Well, if you play a fantastically sampled cello on the keyboard and put your hands on the keys lovelessly and cluelessly, it just sounds pathetically weak, which you can hear especially in legato passages. Powerless, without any living emotion, without any of the virtuoso details that any decent cellist delivers, the very player you should take as a role model and somehow imitate. A cellist approaches the note with a great tone, the bowing is perfect and the dynamic range is enormous. A live performance by a cellist is a treat for the listener. The dreadfully played digital counterpart tends to make the listener fall asleep or run away; nobody is interested in that.

Maybe not everyone, because there are people with wooden ears who don’t notice. They are numb to the constant bombardment of the music industry anyway and apparently put up with any musical filth that doesn’t even begin to deserve the term music. It’s all about that assembly-line production made with loops, MIDI chords and all kinds of automated computer stuff including auto-tune, copy/paste and quantization: music like chewing gum that you spit out after enjoying it. It’s thanks to the outstanding sound quality of the samples that you can now clearly hear all this. And turn away from it. As a listener. We also encounter a flood of sounds with which synthesizers and software instruments are stuffed. Hard to beat in terms of irrelevance, lacking any kind of dynamic spectrum. Musicians trample around in them like someone in clogs through a flower bed.

Some sound designers struggle with so-called signature sounds, but these are no good if they lack any real character of their own, as well as the right musical signature, and are therefore arbitrary. So we are at a moment where we can recognize that we have taken the wrong highway exit and need to get back onto the road of further evolution of electronic instruments. The musicians now have to catch up with the technology, which is a very unpleasant situation. And manufacturers should stop developing faceless mass-produced goods that can no longer be told apart from the competitor’s product. Digital pianos that all sound almost the same? That’s not the best idea. Mountains of synthesizer sounds that you won’t remember once you’ve heard them. Should electronic instruments be an inspiration? Yes, absolutely. But first and foremost they should be a suitable tool for making music. And that doesn’t work by simply packing in more and more features.

It works with aspiration, dedication and love on the part of the makers. Manufacturers and musicians have to work hand in hand as equals. And musicians must feel all this when they play the new instrument for the first time. Let it appeal to them to such an extent that they are only on their fifth preset after an hour of trying it out. So the next development phase has begun right now. Here we go.

.

Copyright notice:

Sharing/reblogging is expressly desired. Reprinting, even in part, as well as any editing and commercial reuse are not permitted or require written permission from me.

FM Synthesis: The Third Gate

For most musicians, FM Synthesis is a blessing because the sounds are simply sensational. It all started with the Yamaha DX7, which was a milestone and today has a cult status like the Minimoog. FM sounds are great for all kinds of genres. They can be incredibly dynamic and often have an enormous presence and, above all, character. On the other hand, FM is almost a curse for some people, because creating your own sounds seems quite complicated to them.

So, what to do? Well, you can simply use the presets and play them just as they are. That’s a very practical way of dealing with it. So keep your fingers off the editing? Sure, why not. After all, the manufacturers of FM synthesizers have usually hired people who understand preset making. You can do quite well with that, especially when there is a range of controllers such as the modulation wheel, aftertouch, sliders or knobs that can be used to modulate a sound in realtime so that you can play it expressively. Many musicians get on very well with this and you could say: end of story. But what if you are not satisfied despite the many presets in the instrument? If there is something about every sound that bothers you? What can you do? Perhaps you have at least read the operating instructions in full, but that hasn’t made you much smarter: they use so many terms you’re not familiar with, and you don’t know exactly what to do with them. So this question remains: what to do now?

Well, there are basically three successive gates you can go through to conquer FM synthesis for yourself. I will introduce you to these three in turn. Once you have read through the following text with these three gates, you can think about what appeals to you and how far you want to go. But one thing is important: in order to get to The Third Gate, you must first go through The First and The Second. Now, do you want to try this and find your way? No, rather not? Ok, then you can stop reading at this point and just work with the presets as usual and as they are. As I said, that’s also fine and you can make your music with it, no question. Or are you ready to open the Gates and use what you find behind them for yourself? Ok, then read the following text through to the very end. And maybe you’ll get so excited that you’ll want to reach The Third Gate and go through it. Alright, in this case: let’s get started.

The First Gate

Behind this Gate lies the simple and also quite fast method of the “strong and beautiful in 5 minutes” type. This is particularly suitable for people with little time on their hands who want or even need to concentrate on the music. Writing songs or rehearsing them for a cover band can be time-consuming enough. There is little opportunity to study manuals in between, let alone to familiarize yourself with the complex user interface of an FM synthesizer and be able to operate it quickly, just to customize a few sounds. So there is the simple method where you just change things in existing presets so that they fit a song better. In this case, all you have to do is pick out the preset that comes pretty close to the sound you need. Then you look at what simply doesn’t fit. These are usually things like Attack or Release time. Maybe just the duration of the Sustain, because the sound should actually decay by itself a while after the key is hit. Or the Velocity response is not as good as it should be. Or it is even simpler and only the Reverb or Chorus effect is too intense or too weak.

In such cases, you usually have to deal with a manageable number of parameters that are more or less always the same and need adjusting. So you first look for these parameters in the operating manual, read the explanatory text and then look for where to find them on the synthesizer panel, in the display or in the software on the computer monitor. Once you have located them, try to memorize their places on the user interface so that you can find them as quickly as possible when you need them. There’s nothing worse than not knowing where they are. You don’t want to waste time on such small sound changes, because that kills creativity. You’ll notice that over time it becomes quicker and quicker and you’ll be pleased with your success. This in turn motivates you to do it again and again when it’s required, because it starts to require no particular effort on your part. In short: that is The First Gate, and its goal achieved.
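If it helps to see how small these edits really are, here is a sketch in Python with a purely hypothetical preset dictionary; the parameter names and value ranges are invented for illustration and do not belong to any particular synthesizer:

# Hypothetical preset data; names and ranges are invented for illustration.
preset = {
    "name": "FM E.Piano",
    "attack_ms": 5,
    "release_ms": 300,
    "sustain_level": 0.8,
    "velocity_sensitivity": 0.5,   # 0 = none, 1 = full
    "reverb_send": 0.6,
}

def tweak(p, **changes):
    # Return a copy of the preset with a few values adjusted for the song.
    adjusted = dict(p)
    adjusted.update(changes)
    return adjusted

# Typical First Gate edits: softer attack, longer release, less reverb.
song_version = tweak(preset, attack_ms=40, release_ms=800, reverb_send=0.3)
print(song_version)

Three values changed, the rest untouched: that is the whole First Gate in a nutshell.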

The Second Gate

This is for someone who has either already gone through The First Gate or wants to understand something properly right away. Someone who is prepared to invest a considerable amount of time to explore a complex system with countless parameters and menu branches. To do this efficiently, you first need to familiarize yourself with the existing conditions. The first step is to try out and play all the presets thoroughly to find out everything you can do with the sounds. It is possible that a synthbass played in the middle register is a wonderful percussive sound that can be used to play grooving chords or single notes. Or it can really pack a punch in the top register and can also be used as a solo sound. Or you realize that a percussive sound that sounds great in all registers only needs a long sustain phase and a gentle attack to deliver a great pad.

It will certainly take a good while to work through all the available presets. But that’s the way it is. After all, you’re probably having a lot of fun just playing, and maybe you’ll even come up with some new musical ideas, such as a completely different bassline or a cool chord progression that inspires a new song, simply because you didn’t always want to play the same thing during the preset test. Once you’ve tried out the presets, pick up the manual and read through it if you haven’t already done so. But even if you have already read the manual, just do it again, because you will certainly read some passages with a different perspective now that you know what the things the author described there sound like. You’ll also be able to memorize the terms better. Once you’ve done that, and perhaps marked a few key points in the manual with a pencil for future reference, it’s time to go to the synthesizer’s Edit page, or to the user interface of the software version. As described in The First Gate, you now look for the locations of the parameters. This should be done thoroughly so that you know where the individual sections such as Envelopes, LFOs, Pitch etc. are located. After all, you want to be able to find everything you want to access as quickly as possible. In contrast to The First Gate, you are now in a position to really create a sound from scratch. The best way to do this is to work systematically and build up the sound in the same way as you would when studying an acoustic instrument.

Take a flute, for example. There are always several sound components. You have an attack phase, which is perhaps a kind of air noise. Then comes the sustain phase, when the sound resounds for a longer period of time. And there is a decay or release phase. Sometimes there isn’t, in which case the sound simply stops abruptly after the key is released. Then come the modulations, which means that you can modulate the sound with a vibrato or tremolo, for example, in addition to the volume and overtone dynamics. Either dynamically as required while playing, or automatically by programming a fixed vibrato effect that only fades in a moment after the key is pressed. These dynamics can be expanded further so that you use all of the keyboard’s controllers: perhaps increase the Volume using Aftertouch, call up the Vibrato using the Modulation Wheel, extend the Decay time using the Damper Pedal and so on. If you do this often, you’ll find that over time you get to know the instrument more or less inside out and can easily and quickly create a new sound, programmed completely from scratch. Adjusting an existing sound will then take hardly any time at all, for example during a recording session or a band rehearsal.
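For readers who like to see the structure written out, here is a minimal sketch in Python with numpy of exactly these building blocks: a simple envelope, a breathy attack, a delayed vibrato, and a mod-wheel value that scales the vibrato depth. All the numbers are invented, and this is a structural illustration rather than any instrument’s actual engine:

import numpy as np

SR = 48000

def envelope(n, attack=0.02, decay=0.1, sustain=0.7, release=0.3):
    # Piecewise-linear amplitude envelope over n samples, release appended.
    a, d, r = int(attack * SR), int(decay * SR), int(release * SR)
    s = max(n - a - d, 0)
    return np.concatenate([
        np.linspace(0, 1, a, endpoint=False),
        np.linspace(1, sustain, d, endpoint=False),
        np.full(s, sustain),
        np.linspace(sustain, 0, r),
    ])

def flute_like(freq=440.0, dur=1.5, mod_wheel=0.5):
    env = envelope(int(dur * SR))
    t = np.arange(len(env)) / SR
    # Delayed vibrato: the LFO fades in after 0.3 s, depth scaled by the wheel.
    fade_in = np.clip((t - 0.3) / 0.5, 0, 1)
    vibrato = mod_wheel * 0.01 * freq * fade_in * np.sin(2 * np.pi * 5.5 * t)
    phase = 2 * np.pi * np.cumsum(freq + vibrato) / SR
    # Breathy attack: a little noise shaped to die away over the first 0.1 s.
    breath = 0.05 * np.random.randn(len(env)) * np.clip(1 - t / 0.1, 0, 1)
    return (np.sin(phase) + breath) * env

tone = flute_like()
print("rendered", len(tone), "samples")

Swap the aftertouch, wheel and pedal values in and out of such parameters and you have, in miniature, the routing described above.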

And another thing: if you discover a preset that you really like the way it is made, you can now look at its data and, with the knowledge you have gained, work out how the programmer did it. And add that to your wealth of experience. Because a wealth of experience is fundamentally important in FM. So, that was the way through The Second Gate.

The Third Gate

Let’s move on to FM magic. Excuse me, what does FM have to do with magic? Well, in this blog post we are moving from the primitive to the complicated to the simple. And with The Third Gate, we’ve landed on the simple. The core of the FM universe. And that’s where the magic of FM synthesis lies, and it really is simple. Let’s take a look at how the story began.

The inventor John Chowning has said in interviews that he was actually looking for Vibrato effects because the simple sine waveform sounded too sterile and lifeless for his needs. So he thought: how about simply combining two such sine waves and trying to set them in motion to create the desired Vibrato effect? The computer he was using at the time couldn’t do anything in realtime; you had to wait until the calculation had finished and the audio result was ready to listen to. And he was amazed at what he suddenly heard. By simply changing the Pitch of the second sine wave, a completely different waveform emerged.

He once said that the moment he found out what these two connected sine waves could do was a magical moment. Ok, so what does that have to do with us? Well, we can experience such a moment too. That’s when the penny drops: aha! And how does such a moment come about? Well, it’s a matter of luck. It happens to you, or it doesn’t. And how do I know? Because I experienced it. It was some years ago, and until then I thought that after all those years of studying FM, I had understood the whole thing. Yes, I certainly had. But it was an intellectual understanding. One that goes hand in hand with formulas and math. With knowledge about acoustics, about how acoustic instruments are built and how their sounds are created. How sound waves work and what role the room in which a sound is produced plays.

Now, back to Chowning. He once recommended in an interview that, firstly, you are pretty well served with six operators, as on the DX7. And he added that you can really do an incredible amount with them and get a lot of great sounds out of them. He concluded by saying that you should try working with just two operators and see what you can do with them alone, and use operator feedback to generate potentiating resonance fields. So, you now have a key in your hand and it’s up to you what you do with it. Because you can use it to try and give your luck a little boost. You can also read my blog post FM Synthesis: It’s all about Vibration. And maybe it will happen to you too: that magical moment when you know, yay, that’s how it works? Yes, that’s exactly how FM works. Good luck!
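That two-operators-plus-feedback suggestion fits in a few lines of code. A sketch in Python with numpy, computed sample by sample because the feedback path needs the previous output; the values are arbitrary experiment settings, not a recipe:

import numpy as np

SR = 48000

def two_op_fm(carrier_hz=110.0, ratio=2.0, index=4.0, feedback=0.3, dur=1.0):
    # Two-operator FM where the modulator also feeds back into itself.
    n = int(dur * SR)
    out = np.zeros(n)
    mod_prev = 0.0
    for i in range(n):
        t = i / SR
        # Modulator with self-feedback: its previous output bends its own phase.
        mod = np.sin(2 * np.pi * carrier_hz * ratio * t + feedback * mod_prev)
        mod_prev = mod
        # Carrier phase is pushed around by the modulator, scaled by the index.
        out[i] = np.sin(2 * np.pi * carrier_hz * t + index * mod)
    return out

signal = two_op_fm()
print("peak level:", round(float(np.abs(signal).max()), 3))

Raising the feedback value pushes the modulator toward a brighter, saw-like spectrum and eventually toward noise, which is exactly the kind of territory worth exploring by ear.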

.

.

.

Copyright notice:

Sharing/reblogging is expressly desired. Reprinting, even in part, as well as any editing and commercial reuse are not permitted or require written permission from me.

Alesis Fusion: From problem child to superstar

It was 2005 when Alesis launched a keyboard workstation on the market. It was called Fusion, and for good reason: under the hood it really is a powerhouse with four engines. A virtual analog synthesizer, another section capable of FM synthesis, a sample player and physical modeling. And that’s not all, because the developers also added a kind of drum machine to the keyboard, as the built-in arpeggiator can read MIDI files and therefore also drum patterns.

A workstation is only a workstation if it has a sequencer, and that’s exactly what the Fusion offers. So, that’s the feature list in a nutshell. Does it all sound pretty well thought out? Yes, Alesis already had experience with keyboards: the first model was called QuadraSynth, its successor QS. That should be enough to venture into the premium workstation class. Really? Well, it turned out to be more than a challenge. At the time of its market launch, the instrument was available in two versions for less than 2,000 euros, which was sensational. Its look is quite unconventional: the silver aluminum housing curves toward the front and back and is reminiscent of an airplane wing. Let me fly! And the design of the controls was probably inspired by American classic cars of the 50s and 60s. In any case, the look has character, is photogenic and shows its strengths in the everyday operation of the keyboard. The instrument came at a good time for musicians. Workstations had been popular for several years, starting with the Korg M1. Everything in, everything on. And the Fusion cut a good figure straight away with its features. You could only get something like this for a lot more money, such as the Korg OASYS, which came onto the market in the same year. This earned the Fusion the nickname “Poor Man’s Oasys” among keyboardists, and rightly so.

However, the joy was initially dampened unexpectedly by some very annoying bugs. There weren’t too many of them, but they were hotly discussed by musicians in the relevant forums. There were error messages that appeared on the display when loading samples, and you didn’t know what they meant and, above all, how to fix them or at least work around them. Or master clock problems when synchronizing audio data with MIDI data in the sequencer. And many other things. That dampened the joy, at least temporarily. Alesis worked feverishly to eliminate all these problems, and OS 1.24 put an end to most of them. The remaining desire for the Fusion was fueled by a flood of additional sounds. Which was logical, because the internal factory voices were criticized here and there as being a little pale. One reason was that the factory presets did not yet exploit the great modulation options in the way later sounds would. The modulation matrix on the Fusion is a blessing, both in terms of the wide range of possibilities and the fact that it is quite uncomplicated to use.

Just like the sample player engine, which allows up to four layers. Plus lots of effects for polishing. And up to eight LFOs per voice. Separate envelopes that are displayed graphically, making it much easier to get an overview. The same goes for operation in general: the menu navigation via the display is exemplary, which certainly pleased the musicians, as did the equipment with over a dozen real-time controllers. The four knobs, for example, are endless encoders, which makes them extremely convenient to use. In addition, their values and positions are shown graphically on the display. The effects are numerous, although perhaps not always quite so great in detail. The reverb effect, for example, has occasionally been criticized, although the plate model in particular is quite good.

The two keyboard versions are just as good: one with semi-weighted plastic keys and one with weighted piano-type keys. The latter are relatively smooth-running, which still allows pianistic playing. And the plastic-key action of the 6HD is nicely taut and therefore good for accentuated playing. Both Fusion versions, the 6HD and 8HD, are comparatively light, weighing just xx kg and yy kg respectively, which makes the instruments easy to transport.

And there are other pleasant features, such as a CF card slot for additional data, although the internal hard disk is already very convenient, and a novelty in keyboards at the time anyway. It allows any number of sound banks directly on board, so no expensive extra cards are required. This is very unusual for keyboards of this type, which often only offer a limited number of internal memory locations. Such a powerful keyboard is therefore ideally equipped.

It didn’t take long after the market launch of the Fusion for owners to get together in forums and help each other out with tips and solutions to problems. Not just because of the annoying bugs, but rather because of all the great possibilities of the Fusion. Sounds and samples were distributed, suggestions were made, and it also became an advertising platform for third parties like me. I was one of the people hired by Alesis to program the factory presets long before the Fusion was launched. During this work I had already realized that there was much more in the instrument than the few sound banks that were included, so I created further offerings and informed the users in this forum about them. The forum owner had allowed this self-promotion. The first Program Presets Bank with 364 new sounds was literally snatched out of my hands. And not only presets were in demand, but also sample libraries.

To date, over 30 of them have been produced by me alone. There are a number of other providers, and the Fusion is probably one of the best-supported instruments in the workstation sector. There are also tutorials, some as website articles, some in forums, some as videos on YouTube. And also a book, “My Fusion Secrets”, which I published and which was available with a sample library plus two sound banks. It will be back soon as a new edition with more sounds. The Fusion is featured on many music productions and has earned itself the status of “keyboardist’s darling”, despite all the difficulties at the beginning. The developers in particular deserve to be honored today. People who have a Fusion love it and never want to throw it out of the studio or the bedroom.

There are even pimp-my-synth options, such as replacing the internal hard disk with an SSD or upgrading the memory to up to 128 MB of RAM. The used price is still surprisingly low today. So it’s still a good idea to get a used Fusion if you’ve listened to a few online demos and are considering buying one. Today, the Fusion is certainly one of the top digital keyboards of the 2000s and has made it to the Olympus of the most popular.

.

.

.

Copyright notice:

Sharing/reblogging is expressly desired. Reprinting, even in part, as well as any editing and commercial reuse are not permitted or require written permission from me.

Realtime Controller: It’s all about how you do what you do

When Keith Emerson started using the big Moog Modular synthesizer live on stage at his ELP concerts, Bob Moog was quite surprised. Why? Well, he had designed these large cabinets with their countless patch panels, knobs and switches primarily for broadcast purposes; they were also intended for use in recording studios. They were not yet built for stage durability, and he was afraid that Keith Emerson would run into problems with the Moog Modular. He even had doubts as to whether transportation could be handled properly on a rock tour. At an ELP concert near Bob’s hometown, he took the opportunity to see it in person. He was visibly impressed by Keith’s performance and how he used the Moog Modular. Perhaps he had also listened to audio recordings of the ELP concert at the now legendary Isle of Wight Open Air Festival, where you can hear some of the Moog Modular’s operational problems. However, both Keith and the technical crew obviously mastered these problems quite well.

In any case, he remained in close contact with Keith in the following years. And since he was fascinated by the idea of being able to use synthesizers on stage, he developed several prototypes, together with Herb Deutsch, for a small portable keyboard instrument. This was inferior to the Moog Modular in terms of sonic possibilities, but at least it contained the most important components and functions of the Moog Modular – and was compact, portable and easy for musicians to understand and operate.

When he was finished with his concepts, it became the famous Minimoog. The great advantages of the Minimoog, combined with its great sound, brought the desired success for everyone. The parameters were all pre-wired or accessible via switches. And the controls were dimensioned in such a way that they were easy to reach even on poorly lit stages and could be operated while playing. Basically, this was the prototype of a controller keyboard; the only difference was that it also had its own sound generation.

Why a controller keyboard? Well, the knobs and switches operate parameters, and they are the truly important parameters of a synthesizer sound. Later, synthesizers became more and more packed with such controls, and today there are often displays with a myriad of parameters that you have to reach via menu navigation if you want to make changes to the sounds. However, the abundance of these parameters made it difficult to use them for dynamic sound changes during a performance. And that was precisely the problem. Why? Well, the sound alone, a finished preset for example, is only half the battle. But the intention of playing it dynamically, and thus serving the musical idea, was largely thwarted. How could this situation be improved in favor of the artist?

This is how it worked: at some point, several synthesizer manufacturers decided that a handful of dedicated controllers would be a good idea. Important parameters could be assigned to them with the desired intensity and range, so that they could be called up and used by hand and foot during a performance. The Yamaha DX7 was one of the first synthesizers to offer something like this. It was probably a case of necessity being made into a virtue, because purely menu-driven, digital access to parameters is useless as a dynamic tool during a performance. Today, there are keyboard instruments with a more extensive range of controllers, and with master keyboards this is sometimes too much of a good thing.

Now, when using a sound, it’s not the what that matters, the pure sound itself, but the how: what you do with it while playing your music or composition. A need was recognized here and implemented, exemplified first by the Minimoog, even though it was probably not thought of in those terms at the time. Today, the controller arsenal of a keyboard can be fantastically adapted to your own purposes. For example, certain modulations of filter functions can be assigned to one controller, envelope parameters to another, effects and their intensity to a third. To use them elegantly and conveniently for your own music during the performance without any fuss. As a dynamic tool to express virtuosity, and to enable interesting sound variations during an improvisation. With MIDI recording, this can also be done in a second step by recording controller data on an extra track after the music has been recorded.

If you are clever, this happens with a certain standardization. You simply think about how each sound category should be controlled in a particularly clever and targeted way. With a lead synth sound, for example, it makes sense to assign this quick and convenient access to Filter Cutoff and Resonance. And perhaps Attack Time to a different controller, but with a very small range, so as not to have control steps that are too coarse at the start of the note. Assign a second oscillator to another controller, whose volume level can then be faded in with it, or simply switch the second oscillator on and off with a button. Effects such as delay and reverb are also well suited to such a practical controller set. You can do the same with pads, or think of something else for them: for example, assign the Release time to a controller, and add two or three more sounds via controllers to improve the dramaturgy options during the performance. Or control a bass in this way, for example by adding an octave, switching on an extra attack sound or blending it in successively using a controller, or calling up a fat unison sound. For pianos or percussive instruments such as guitars, you can also put together your own controller set to use during the performance. Operation is particularly easy if you standardize this and equip sounds of the same category with identical controller assignments, as sketched below. Then you don’t have to memorize so many different controller assignments. If you have a synthesizer or a workstation in front of you where the presets come with completely individual controller assignments, you can adjust them to your own needs. Otherwise you would have to memorize the settings of every single sound, and that is impossible.
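Written down, such a standardization is nothing more than a small table per sound category. A sketch in Python with purely hypothetical CC numbers and parameter names (your instrument’s actual assignments will differ), showing a per-category map and a tiny dispatch function:

# Hypothetical, standardized controller sets per sound category.
# CC numbers and parameter names are invented for illustration.
CONTROLLER_SETS = {
    "lead": {74: "filter_cutoff", 71: "resonance", 73: "attack_time"},
    "pad":  {74: "filter_cutoff", 72: "release_time", 91: "reverb_send"},
    "bass": {74: "filter_cutoff", 80: "sub_octave_level", 81: "unison_on_off"},
}

def handle_cc(category: str, cc_number: int, value: int) -> None:
    # Route an incoming control change to the parameter it stands for.
    mapping = CONTROLLER_SETS.get(category, {})
    parameter = mapping.get(cc_number)
    if parameter is None:
        return  # this controller is not part of the category's standard set
    print(f"{category}: set {parameter} to {value}/127")

handle_cc("lead", 74, 96)   # the same knob always means brightness
handle_cc("pad", 72, 30)

Because every lead, pad and bass shares the same few assignments, muscle memory does the rest on stage.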

So it depends on how you can use a sound for yourself. The sound alone is only half the battle. But with a great controller assignment it becomes real fun, and a signature sound full of character.

.

.

.

Copyright notice:

Sharing/reblogging is expressly desired. Reprinting, even in part, as well as any editing and commercial reuse are not permitted or require written permission from me.

FM Synthesis: It’s all about vibration

It was Dr. John M. Chowning who first discovered FM synthesis in the late 1960s. As a musician and composer, he was already working on his pieces with a computer system at Stanford University in California. The sounds of the system were very sterile, and he wanted to liven them up a bit and tried to do so with vibrato effects.

Although the computer available at the time was cutting-edge, it only offered simple sine waveforms, so Chowning experimented with two sine waves to which he assigned different pitches. The results were always a long time coming, as the calculation took a while. This led to the magical moment when two sine waves connected together produced completely different waveforms, such as square-like waves and bell-like inharmonic spectra. So instead of continuing to use vibrato to make sterile sine sounds livelier, he shifted his experiments to generating different waveforms by means of linked sine waves and their tuning ratios to each other. The first results were reminiscent of the additive synthesis already known at the time, i.e. overtone spectra, but ones that could be produced with far less effort.

It quickly became clear that it was possible to create sounds reminiscent of conventional instruments, such as trumpets and flutes, instead of the typical synthetic sounds that analog synthesis was and is known for. At the beginning of the 70s, the matter was ready for patenting, and not much later Yamaha became a licensee. The rest is history, because nothing has been the same since the Yamaha DX7 synthesizer. And since then, musicians have been trying to create their own sounds with FM synthesis. Most of them don’t succeed, having previously been told that FM synthesis is difficult to understand. But that is not true.

FM synthesis is basically simple. So why do so many musicians fail to get to grips with it? Because you can’t learn FM synthesis, you have to experience it for yourself. Preferably in a playful way, but not completely without a plan. And you also need time. There’s a good chance that this will happen at some point, just like with Chowning: suddenly there it is, the magic moment. When the penny drops and you realize: oops, that’s how it works?

Yes, that’s exactly how it works. However, it helps to have a little knowledge of acoustics and conventional musical instruments and, above all, patience and the drive to get to grips with FM synthesis in practice. Oh, so real work? Yes, but only if you’re really serious about it, enjoy exploring things and just won’t rest, trusting that it will happen at some point. When will that be? You’ll know, guaranteed.

It’s about vibrations. Slow and fast, and combinations of both. Carefully tuned and also randomly thrown in. Those that resonate with each other. And some that are brought to life through dynamics. The ear, with its highly sensitive ability to recognize the finest or coarsest movements and differences in timing, can show its best side here.

Chowning once said in an interview that you program an FM synthesizer with your ears. And that’s exactly what it is. You simply listen carefully to what you do with the parameters. Over time, you gain a wealth of experience that helps you to proceed in an increasingly targeted manner. The order in which you proceed is something you decide for yourself. FM is frequency modulation and that literally says it all. You are only dealing with frequencies that you can impose your own will on. You can determine their color, influence the duration of the sound development, whether any changes should take place over time and how dynamic control can influence it at any time.

So, we are talking about frequency modulation with only two operators. The operator with the carrier frequency is modulated by the second operator. So there is a relationship between these two, like two people having a dialog with each other. The tuning of the modulator therefore influences the sound of the operator that delivers the carrier frequency. Since the pitch of the modulator is normally set with a parameter called Pitch, Ratio or something similar, you have a long list of tunings with which you can create almost any waveform. And it doesn’t have to be an even value for Pitch; you can also use odd values.
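In its textbook form, which is the standard Chowning formulation rather than anything specific to one instrument, this two-operator relationship can be written as

y(t) = A \sin\big(2\pi f_c t + I \sin(2\pi f_m t)\big), \qquad f_m = r \cdot f_c

where f_c is the carrier frequency, r is the tuning ratio set with that Pitch parameter, and I is the modulation index, i.e. how strongly the modulator’s output bends the carrier’s phase. Changing r decides which partials appear; changing I decides how many of them become audible.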

All of this can also be found in nature. Ocean waves are created by deep trenches in the deep sea plus winds above the water surface; both modulate each other and currents are created. Frequency modulation at its finest.

It doesn’t take long to realize that a large number of different waveforms can be generated with two operators alone. And that’s without even touching the carrier’s frequency, because the carrier can be tuned too, just like the modulator. Given this multitude of possibilities, is it easy to lose track? Indeed it is. FM instruments usually have more than just two operators; the DX7, for example, has six of them. So how do you keep an overview? Firstly, by collecting a few empirical values: a ratio of 1:1 gives you a full, sawtooth-like series of harmonics, while 1:2 emphasizes the odd harmonics and sounds more square-like, and so it goes on until the sidebands reach the inaudible range.

Once you’ve done this a few times, you’ll automatically remember the waveforms that you particularly like because you use them in your music, and others that somehow deliver the opposite, i.e. sound as if they’ve been brushed against the grain. If you keep playing some phrase on the keyboard during these pitch experiments, you can immediately hear what the currently set waveform is suitable for and what it is not. If you want to go further, detune the carrier instead of the modulator and try out all possible ratios relative to the modulator.
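Collecting those empirical values can even be automated a little. Here is a sketch in Python with numpy that renders a short two-operator tone for a handful of ratios and reports which harmonics of the carrier dominate; the ratios and the fixed modulation index are arbitrary test values, not recommendations:

import numpy as np

SR = 48000
t = np.arange(SR) / SR  # one second

def dominant_harmonics(carrier_hz, ratio, index=2.0, top=8):
    # Render a two-operator FM tone and list its strongest harmonics.
    tone = np.sin(2 * np.pi * carrier_hz * t
                  + index * np.sin(2 * np.pi * carrier_hz * ratio * t))
    spectrum = np.abs(np.fft.rfft(tone * np.hanning(len(t))))
    freqs = np.fft.rfftfreq(len(t), 1 / SR)
    strongest = np.argsort(spectrum)[::-1][:top]
    return sorted({round(freqs[i] / carrier_hz, 1) for i in strongest})

for ratio in (1.0, 2.0, 3.0, 1.5):
    print(f"ratio 1:{ratio} ->", dominant_harmonics(110.0, ratio))

The 1:1 case lists the full harmonic series, 1:2 mostly the odd harmonics, and the non-integer ratio shows the inharmonic, bell-like territory. The ear remains the final judge, but a quick printout like this helps build that pool of empirical values faster.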

The bottom line is that theory and hands-on practice work best together, and what you learn this way sticks. A little trick here is to be decisive: no matter how many options a given tuning offers, simply save a reasonably suitable waveform. That will definitely help later on. This way you create a pool of candidates that you can continue to work on when it comes to creating complete instruments.

.

.

.

Copyright notice:

Sharing/reblogging is expressly desired. Reprinting, even in part, as well as any editing and commercial reuse are not permitted or require written permission from me.