The Hip-Hop Beat Maker’s Missing Manual

Recommended reading:

http://www.hiphoprally.com/pages/ebook


Recording MIDI from Maschine Studio directly into Ableton Live

Based on the tutorial by St. Jones.

My initial problems were solved once I discovered that MIDI input to the Maschine VST has to be enabled. I hope this helps someone!

1) Set up the MIDI Host group in the Maschine Studio VST

  1. (optional) Rename the group to MIDI Host
  2. Right-click the MIDI Host group -> Group MIDI Batch Settings -> Sounds to MIDI Notes
  3. Set sound/pad #1's MIDI output note to C3
  4. Assign the remaining MIDI Host group notes chromatically, transposing each note one semitone above the previous one, e.g. C#3, D3, D#3, …, C4
  5. Select all notes and set their MIDI output to Host
  6. Set the MIDI Host group's MIDI input to active, channel 1 (no feedback)
  7. Set Group 2's MIDI input to channel 2 and transpose the root note to C3
  8. Set Group 3's MIDI input to channel 3
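The chromatic assignment in step 4 can be sketched in a few lines of Python. Note that MIDI octave naming is a convention, not part of the MIDI spec: the snippet below assumes C3 = note 48, while some hosts (Ableton Live among them) label note 60 as C3 instead.

```python
# Sketch: chromatic MIDI note assignment for 16 pads starting at C3.
# NOTE_C3 = 48 is an assumed convention; some DAWs call MIDI note 60 "C3".
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
NOTE_C3 = 48

def pad_notes(num_pads=16, root=NOTE_C3):
    """Return (pad_number, midi_note, note_name) for each pad, one semitone apart."""
    notes = []
    for pad in range(num_pads):
        midi = root + pad
        name = NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
        notes.append((pad + 1, midi, name))
    return notes

for pad, midi, name in pad_notes():
    print(f"Pad {pad:2d} -> MIDI {midi} ({name})")
```

Pad 1 lands on C3 and pad 13 on C4, matching the "C#3, D3, … C4" sequence above.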

2) Ableton

  1. Set external instrument #1 MIDI settings to Maschine
  2. Set external instrument #2 MIDI settings to Maschine
  3. Set track 2 MIDI Output to “Maschine channel 2”
  4. Set track 3 MIDI Output to “Maschine channel 3”

Audio:

  1. Set group #2 audio output to Ext2
  2. Set external instrument #1 audio input to Maschine channel #2

“Drums DB tools” – project overview

Motivation:

  1. Producing beats quickly (under 30 minutes per beat) requires a database of tested, well-known drum samples.
  2. Selecting the best-sounding drum samples requires browsing many different directories.
  3. Some samples are of unacceptable quality for professional productions and need to be removed.
  4. Having metadata and feature representations of sounds enables finding similar sounds in terms of style, quality, origin, and timbre. This may be useful for remixing and for learning about the timbres used by respected producers.

Metrics:

– Quality:

  1. Time to prepare a beat consisting of drums, a sample/instrument track, and a bassline with 1:30 of simple arrangement
  2. Number of samples rated 4 or 5
  3. Percentage of misclassified samples after manual verification

– Progress:

  1. Number of automatically pre-processed and classified samples in each category
  2. Number of manually processed samples
  3. Number of commits to GitHub
  4. Degree of process automation

 

Goals:

– version 0.1.0:

  • Script for renaming files and title metadata extraction
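A minimal sketch of what the v0.1.0 renaming and title-extraction script might look like. The filename pattern ("01 - Hard_Kick (909).wav") and the cleanup rules are illustrative assumptions, not the actual script:

```python
# Sketch: derive a clean title and a normalized filename from a raw sample name.
# The naming pattern handled here is an assumption for illustration.
import re
from pathlib import Path

def clean_title(filename: str) -> str:
    """Derive a human-readable title from a raw sample filename."""
    stem = Path(filename).stem
    stem = re.sub(r"^\d+\s*[-_.]?\s*", "", stem)  # drop leading track numbers
    stem = stem.replace("_", " ")
    return re.sub(r"\s+", " ", stem).strip()

def renamed(filename: str) -> str:
    """Target filename: lowercase hyphenated title, original extension kept."""
    title = clean_title(filename)
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-") + Path(filename).suffix

print(clean_title("01 - Hard_Kick (909).wav"))  # Hard Kick (909)
print(renamed("01 - Hard_Kick (909).wav"))      # hard-kick-909.wav
```

The extracted title would then be written into the sample's title metadata field.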

– future versions:

  • Metadata schema
  • Manually ranked test set
  • Function to select files based on rank and metadata
  • Offline app for manual tagging
  • Python script for feature extraction
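The feature-extraction script could start with basic timbre features such as RMS level and spectral centroid. A NumPy-only sketch on a synthetic drum-like signal; the feature choice and the test tone are illustrative assumptions:

```python
# Sketch: two basic per-sample features (RMS and power-weighted spectral
# centroid) for similarity search, computed with NumPy on a synthetic kick.
import numpy as np

def extract_features(signal: np.ndarray, sr: int) -> dict:
    """Return simple per-sample features usable for similarity search."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = float(np.sum(freqs * power) / np.sum(power))
    rms = float(np.sqrt(np.mean(signal ** 2)))
    return {"rms": rms, "spectral_centroid_hz": centroid}

sr = 44100
t = np.arange(sr // 10) / sr                         # 100 ms
kick = np.sin(2 * np.pi * 60 * t) * np.exp(-t * 30)  # decaying 60 Hz sine
print(extract_features(kick, sr))
```

For the low, decaying "kick" the centroid comes out near its 60 Hz fundamental, which is the kind of separation (dark vs. bright samples) the similarity search would rely on.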

Future plans

  • Build a database of transcriptions
  • Extract grooves and apply them to any transcription
  • Select representative features of the rhythmic layer
  • A system for detecting copied beats

Deliverables:

  1. Python library for drum processing
  2. Database with preprocessed samples and metadata
  3. App for manual tagging

Plan

  1. Finish the script for search and renaming
  2. Commit to GitHub
  3. Add title metadata creation
  4. Prepare the metadata schema
  5. Select a test set, N = 3 per category
  6. Manually annotate the test set

Ableton: a short track.

1) CTRL ALT B – open browser

2) Select kit

3) Double-click to create a new MIDI track

4) “B” to change from edit to draw mode

5) Ctrl + 2 – widen grid, Ctrl + 1 – narrow grid

6) Create track

7) Ctrl + Alt + T – new MIDI track

8) Arm tracks and enable “session record”

9) Click any track’s “stop” button to stop its currently playing clip

10) Use the master track to play whole scenes (clips in the same row)

11) Insert audio effects

12) Record the session into arrangement view

13) Disarm all tracks

 

Shift + Home – select the arrangement from the current point back to the beginning

Sound visualization (Coursera)

Hi, my name is Michael.

For the first peer reviewed assignment I decided to choose the topic of sound visualization.

In my presentation I hope to explain the basics of three possible domains used for sound representation and present open-source tools with examples of various sound visualization techniques.

Some of the most popular and useful domains for representing sound waves are:

  • time domain
  • frequency domain
  • time-frequency domain
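The three domains can be demonstrated in a few lines of NumPy; the 440 Hz test tone, sample rate, and frame size below are arbitrary illustrative choices:

```python
# Sketch: one test tone viewed in the time, frequency, and
# time-frequency domains, using NumPy only.
import numpy as np

sr = 8000
t = np.arange(sr) / sr                 # 1 second
x = np.sin(2 * np.pi * 440 * t)        # time domain: amplitude vs. time

spectrum = np.abs(np.fft.rfft(x))      # frequency domain: magnitude vs. frequency
freqs = np.fft.rfftfreq(len(x), 1 / sr)
peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz)                         # 440.0

frame = 256                            # time-frequency domain: magnitude per
frames = x[: len(x) // frame * frame].reshape(-1, frame)  # short frame (spectrogram)
spectrogram = np.abs(np.fft.rfft(frames, axis=1))         # shape: (frames, bins)
print(spectrogram.shape)
```

The waveform corresponds to the oscilloscope view, the full-length FFT to the spectrum analyzer, and the framed FFT matrix to a (windowless, simplified) spectrogram.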

While recording something with a mobile voice recorder, or playing music in a player with an equalizer, you have probably seen sound displayed either as a dancing waveform or as bars of a spectrum.

 

The static version of the sound-wave representation is the trace of an oscilloscope/oscillograph.

The horizontal axis is time, the vertical is amplitude of the signal.

A recording of an oscilloscope’s real-time display.

The frequency of the tone changes from 20 Hz to 20 kHz.

To see the frequency content of a signal, one should use a spectrum analyzer.

A snapshot of a spectrum analyzer taken at a given moment might look like this.

 

 

Gain staging

WHY?

INPUT:

– -6 dB per channel set in the DAW

– reference mixes adjusted to -20 dBFS RMS

– Best resolution of DAW mix faders is around the neutral (0 dB) position

STRATEGIES:

– matching the plugin’s operational range by adjusting the input signal level, e.g. with a trim plugin

– staying below 0 dBFS!

– aiming at -6 dBFS peaks (20 dBVU)

OUTPUT:

– Average RMS mix level: -20 to -18 dBFS [1]
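Checking these targets numerically is straightforward once you know that dBFS is just 20·log10 of the level relative to full scale. A minimal sketch, assuming float samples with full scale = 1.0; the -6 dBFS test sine is an arbitrary example:

```python
# Sketch: peak and RMS level in dBFS, assuming full scale = 1.0 for floats.
import math

def peak_dbfs(samples):
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_dbfs(samples):
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# A sine with 0.5 amplitude peaks at -6 dBFS; its RMS sits ~3 dB lower.
sine = [0.5 * math.sin(2 * math.pi * i / 100) for i in range(1000)]
print(round(peak_dbfs(sine), 1))  # -6.0
print(round(rms_dbfs(sine), 1))   # -9.0
```

The ~3 dB gap between peak and RMS holds only for a sine; real mixes have a much larger crest factor, which is why the peak target (-6 dBFS) and the RMS target (-20 to -18 dBFS) differ so much.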

FREE VST METERS:

– http://www.mzuther.de/en/software/kmeter/ – everything you might need; works fine with Maschine Studio 2.0, 32/64-bit

MORE INFO:

http://www.digido.com/how-to-make-better-recordings-part-2.html

[1] http://therecordingrevolution.com/2013/11/25/do-you-know-how-to-read-your-meters/

[2] http://www.soundonsound.com/sos/sep13/articles/level-headed.htm