Welcome to the Nsound project page!

Nsound is a C++ framework for audio synthesis. It aims to be as powerful as Csound but with the programming features of C++. Nsound tries to make the process of generating complex and interesting sound as easy for the programmer as possible.

What are the goals of Nsound?

The main goal of Nsound is to develop an Application Programming Interface (API) for sound synthesis with the following characteristics:

    1. Easy to use
    2. Easy to extend
    3. Powerful

What are the basic concepts of Nsound?

In Nsound, all audio data is represented as a floating point number between -1.0 and 1.0.  In this way, it is easy to adjust volume: simply multiply the data by a number between 0.0 and 1.0, a percentage.  The audio data is only converted to 8-bit, 16-bit, or 24-bit integers when it is written to disk with the Wavefile class.
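For illustration only, here is the idea in plain C++ on a std::vector (not Nsound's actual Buffer class):

    #include <cstdint>
    #include <vector>

    int main()
    {
        // Audio data: floating point samples between -1.0 and 1.0.
        std::vector<double> samples = { 0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7 };

        // Halve the volume by multiplying every sample by 0.5 (50%).
        for(double & s : samples) s *= 0.5;

        // The data only becomes integers (here 16-bit) at write time,
        // which is the kind of conversion the Wavefile class performs.
        std::vector<int16_t> pcm;
        for(double s : samples) pcm.push_back(static_cast<int16_t>(s * 32767.0));

        return 0;
    }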

Generators produce oscillations of the waveform stored in them.  Envelopes can shape audio data.  A Mixer class can be used to mix various audio data together.

With these tools, Nsound enables the programmer to generate audio, shape the waveform and mix it all together.
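Mixing, for instance, is conceptually just sample-wise addition.  A small sketch in plain C++ (again, only an illustration, not Nsound's Mixer class):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Mix two tracks by adding them sample by sample, then clamp the
    // result back into the -1.0 to 1.0 range so it stays valid audio data.
    std::vector<double> mix(const std::vector<double> & a,
                            const std::vector<double> & b)
    {
        std::vector<double> out(std::max(a.size(), b.size()), 0.0);

        for(std::size_t i = 0; i < out.size(); ++i)
        {
            double sum = (i < a.size() ? a[i] : 0.0)
                       + (i < b.size() ? b[i] : 0.0);

            out[i] = std::max(-1.0, std::min(1.0, sum));
        }

        return out;
    }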

Audio Data

As mentioned above, all audio data is represented as a floating point value between -1.0 and 1.0.  For example, here is some sine wave audio data:

[Figure: sine wave audio data]

This makes it very easy to manipulate the data.  For example, with the Sine class, we can draw a Gaussian curve:

[Figure: Gaussian envelope]

Now, let's use a 6.0 Hz sine wave as the input:

[Figure: 6 Hz sine wave]

Since all audio data is floating point, simply multiplying the two signals together will shape the signal:

signal *= envelope;

signal.plot("Signal multiplied by Envelope");

[Figure: Signal multiplied by Envelope]
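Putting this example together in one place, here is a sketch of what the whole program might look like.  The header name, the Sine constructor, and the generate() and Gaussian envelope calls are my guesses at the API; only the multiplication and plot() lines are taken verbatim from above.

    #include <Nsound/NsoundAll.h>   // convenience header (assumed)

    using namespace Nsound;

    int main()
    {
        Sine sine(44100);                           // generator at 44100 samples per second (assumed constructor)

        Buffer signal   = sine.generate(1.0, 6.0);  // 1 second of a 6.0 Hz sine wave (assumed signature)
        Buffer envelope = sine.drawGaussian(1.0, 0.5, 0.15);  // Gaussian-shaped envelope (assumed helper)

        signal *= envelope;                         // shape the sine wave with the envelope

        signal.plot("Signal multiplied by Envelope");

        return 0;
    }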

Why did you write Nsound?

I love music, but I can't play any instrument.  A few years ago I discovered Csound and was excited by the idea of creating sound and music with a programming language.  It allowed me to create sound precisely.

The concept of Csound was great; its implementation was not.  My frustration with Csound's syntax is what drove me away.  I am a programmer first and an audiophile second.  I wanted to create a more usable language, something more readable.

In the summer of 2000 I started writing code to implement my own language, Nsound (the 'N' is for Nick).  I was a sophomore at Iowa State University and knew nothing about compilers or writing an interpreter.  This posed a great challenge.

Without any knowledge of how to write a compiler, I managed to invent some of the pieces one needs: a lexer that generated tokens, a parser that requested those tokens, and a symbol table to handle variable declarations.  That was the easy part.  Next I looked at handling mathematical expressions, and groaned at the realization of how complex a compiler really is.  I put my Nsound project on hold.

That fall I eagerly signed up for an "Introduction to Compilers" class.  It was the most rewarding computer science class I have ever taken.  It was amazing to sit through lecture and learn the formal solutions to all the challenges I had faced when trying to write Nsound the summer before.  I learned about grammars and about tools that generate C/C++ parsers from grammar definitions.  I learned to use ANTLR (www.antlr.org).

Okay, so you learned how to write a compiler, why is Nsound written in C++?

Even with a great tool like ANTLR, I would still spend a lot of time writing features for an Nsound language, and I would just be copying the C++ features anyway.  Also, it is much easier to debug a C++ executable than an interpreted language.