To clear up years of personal confusion, I did a "deep dive" into the decibel. What are decibels, and why do they appear in so many different forms? How do they apply to my work with the Web Audio API?
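A quick sketch of the core decibel math that comes up in Web Audio work: dB is a logarithmic ratio, with 20·log10 for amplitude ratios (the linear values a GainNode's `gain` parameter uses) and 10·log10 for power ratios. The helper names `gainToDb`/`dbToGain` are my own for illustration, not part of the Web Audio API:

```javascript
// Convert between linear amplitude gain (as used by GainNode.gain in the
// Web Audio API) and decibels. For amplitude ratios: dB = 20 * log10(ratio).
function gainToDb(gain) {
  return 20 * Math.log10(gain);
}

function dbToGain(db) {
  return Math.pow(10, db / 20);
}

console.log(gainToDb(0.5).toFixed(1)); // ≈ -6.0 dB: halving the amplitude
console.log(dbToGain(-6).toFixed(3));  // ≈ 0.501: -6 dB is roughly half
```

The factor is 20 rather than 10 because power is proportional to amplitude squared, so the square becomes a factor of 2 outside the logarithm.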

I'm currently looking to buy a pre-made template, or to hire someone to develop or clone a site like soundtrap.c*m/studio: a collaborative, cloud-based web digital audio workstation. Reach out so we can connect.

Hi, my research group, the WIMMICS team from INRIA/I3S, located on the French Riviera in Sophia-Antipolis near Nice, has an open position for an engineer on a 36-month research project: building a 2-million-song semantic database plus WebAudio client web apps. The position starts January/February 2017. All details here:

Project title : WASABI : “Web Audio Semantic Aggregated in the Browser for Indexation”

Keywords: Music indexation, Music Database, Natural Language Processing, WebAudio application development, Semantic Web / Web of data.

Description: We are looking for a research engineer with a background in modern web apps (HTML5/JS/NodeJS/MongoDB) and, optionally, knowledge of semantic Web technologies (RDF/SPARQL) and the WebAudio API, to join the Inria/I3S WIMMICS team. Project management capabilities are mandatory for this three-year project.

Project abstract: Today, Deezer, Spotify, Pandora and Apple Music augment their music-listening services with artists' biographies and recommend other songs/albums related to the current song, even though the notion of proximity is not clearly defined. Journalists increasingly use Web resources to prepare their programs. A professor in a music-production master's program will use tools to explain to students the techniques used by producers, etc.

These three use cases have something in common: they rely on knowledge databases, ranging from the most empirical (a Google search) to more formalized ones such as LastFM, MusicBrainz and DBpedia, and/or on audio music analysis tools such as The Echo Nest (bought by Spotify).

The WASABI project's main originality consists in jointly using Music Information Retrieval algorithms and the semantic Web to build the largest open music database (2 million songs provided by Deezer and Radio France), and to perform a natural language analysis of the song lyrics (topics, people, locations, emotions, structure, etc.). The WebAudio API, a W3C standard for writing professional-level music applications, will be used to develop online client applications that take advantage of this huge database.

The WIMMICS team will be in charge of the whole "semantic Web part": collecting music metadata from different data sources, designing the database and its APIs, and developing WebAudio clients. The song-lyric analysis will be done by a PhD student in the team.
The music analysis part will be done by our partners.

Profile: Mandatory requirements for applicants:
At least a master's degree in computer engineering;
Experience with Web technologies: good knowledge of JavaScript, plus experience with REST web services, NodeJS and NoSQL databases;
Experience in project management;
Self-motivated, goal-oriented and willing to work in an international team;
Pragmatic and customer-oriented;
Good level of English;
Not allergic to hearing French :-)

Optional: good command of scripting tools (bash, Unix/Linux tools); able to install Web servers and other open-source tools; able to use VMs.

Duration: 36 months

Salary: depends on profile

Deadline: January 2017


Michel Buffa:
Also reachable via @micbuffa on Twitter or on the WebAudio Slack channel.

Latest video of my guitar amp sim: first with direct-input samples, then playing my guitar in real time. It shows very low latency (< 10 ms) (MacBook Pro + PreSonus sound card).

Is this community dead?

Web Audio API: Why Compose When You Can Code?

Meet Web Audio API, a powerful programming interface for controlling audio on the web. Gone are the days when the web browser could rarely play a sound file correctly. With this API, you can now load sound from different sources, apply effects, create visualizations, and do much more.

In this article, Toptal Freelance Software Engineer Joaquín Aldunate shows us how to unleash our inner musician using Web Audio API with the Tone.js framework by giving us a brief overview of what this API has to offer and how it can be used to manipulate audio on the web.
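To make the "load sound, apply effects" idea concrete without a browser, here is a minimal sketch of the sample-level processing an OscillatorNode → GainNode chain performs: filling a Float32Array (the sample format AudioBuffer uses) with a sine tone and applying a gain stage. The 44.1 kHz rate is an assumption; real Web Audio code would let the AudioContext pick it:

```javascript
// Sketch: generate one second of a 440 Hz sine tone into a Float32Array
// and apply a simple gain, mimicking an OscillatorNode -> GainNode chain.
const sampleRate = 44100; // assumed; browsers typically use 44.1 or 48 kHz
const frequency = 440;    // A4
const gain = 0.5;         // linear gain, as GainNode.gain would hold

const samples = new Float32Array(sampleRate); // 1 second, mono
for (let i = 0; i < samples.length; i++) {
  samples[i] = gain * Math.sin(2 * Math.PI * frequency * (i / sampleRate));
}

// After the gain stage, the peak amplitude is ~0.5
const peak = samples.reduce((m, s) => Math.max(m, Math.abs(s)), 0);
console.log(peak.toFixed(2)); // ≈ 0.50
```

In an actual page, you would copy such samples into an AudioBuffer (or simply connect an OscillatorNode through a GainNode to the destination) rather than looping by hand; Tone.js wraps exactly this kind of node graph behind a friendlier API.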

#FrontEnd #JavaScript #ToneJS #WebAudioAPI

Guitar Amp Simulator — the GitHub repository is here:
The demo is here: