Around 2018, we were a group of coders, tech geeks, audio engineers, and musicians passionate about making new sounds with technology. We also enjoyed programming plugins from scratch and building all kinds of audio effects. At that time, a new trend grabbed our attention: Artificial Intelligence (AI) was a thriving technology, refreshing the state of the art over and over again. From natural language processing (NLP) and computer vision (CV) to digital signal processing (DSP), AI had achieved astonishing results and solved many problems previously considered impossible. We started thinking: could AI be used to model audio effects? The core idea of AI is to let algorithms learn patterns from data automatically, avoiding laborious and error-prone circuit analysis.

Several successful lines of research served as references: Neural DSP and GuitarML used neural networks to emulate guitar amplifiers; the Google Magenta team used Differentiable Digital Signal Processing (DDSP) to synthesize instruments and ambient reverberation; iZotope and several other companies extended the DDSP concept to traditional filter design, including differentiable IIR filters and artificial reverberation. All of this was inspiring and gave us a direction to follow. So, what were we waiting for? Let’s make it happen!

Among all effects, saturators were our first target. When analog gear is overdriven, the output signal becomes distorted, or “colored” – harmonics are added, so the music sounds “warmer”. A saturator is not like a compressor, reverb, or delay – all of which involve long and complex temporal dependencies – which makes it somewhat easier to simulate. Still, we encountered some challenges. First, the degree of saturation depends on the input level: the higher the input volume, the more the signal is colored. These dynamics are present in the British Kolorizer.
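To make the level dependence concrete, here is a toy illustration – not the British Kolorizer’s actual algorithm, and the `drive` value and helper names are our own – showing how a simple static tanh waveshaper colors loud signals far more than quiet ones:

```python
import numpy as np

def saturate(x, drive=4.0):
    # Toy static tanh waveshaper (illustrative, not the product's model).
    # Dividing by `drive` keeps the small-signal gain near unity, so quiet
    # passages pass through almost untouched while loud peaks are squashed.
    return np.tanh(drive * x) / drive

fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone, exactly 1 s long

def third_harmonic_ratio(y):
    # With a 1 s signal, FFT bin k corresponds to k Hz.
    # tanh is an odd function, so it adds only odd harmonics;
    # we compare the 3 kHz component against the 1 kHz fundamental.
    spec = np.abs(np.fft.rfft(y))
    return spec[3000] / spec[1000]

quiet = third_harmonic_ratio(saturate(0.05 * tone))  # low input level
loud = third_harmonic_ratio(saturate(0.9 * tone))    # high input level
# The louder the input, the more harmonic "color": loud is far above quiet.
```

The same shaper, fed the same tone at two levels, yields a third-harmonic ratio orders of magnitude higher for the loud input – exactly the input-dependent coloring a model has to learn.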
However, AI is not magic: without proper architecture and dataset design, it cannot learn such subtle variations. Second, the nonlinearity usually suffers from classic DSP issues, for example DC bias and aliasing. Third, AI models are commonly cumbersome; deploying them on consumer-grade computers requires a lot of optimization. This is especially true at mastering grade: the British Kolorizer runs at sampling rates up to 96 kHz, which would not be feasible without appropriate acceleration and modification. After an all-out effort, we finally conquered these problems, and the British Kolorizer was born.

We summed up the experience gained from making the British Kolorizer and gave it a name – Ariosa Technology. In the near future, we plan to publish a blog unveiling some of the details; if you are a DSP enthusiast, you won’t want to miss it. We will keep exploring new possibilities, both in the diversity of audio effects and in realizing the vision of automatic mixing/mastering. Last but not least, whatever path we choose, we will keep offering products of mastering-grade quality.
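As a small taste for the DSP-curious: the DC-bias issue mentioned above arises whenever a waveshaper is asymmetric, and a textbook fix is a one-pole DC blocker. The sketch below is a standard textbook construction, not our actual implementation; the asymmetric tanh shaper and the 0.995 pole radius are purely illustrative. (Aliasing, the other issue, is conventionally tamed by oversampling the nonlinearity and is not shown here.)

```python
import numpy as np

def dc_block(x, R=0.995):
    # Classic one-pole DC blocker: y[n] = x[n] - x[n-1] + R * y[n-1].
    # A pole radius near 1 puts the high-pass cutoff only a few hertz
    # above DC, leaving the audible band essentially untouched.
    y = np.zeros_like(x)
    prev_x, prev_y = 0.0, 0.0
    for n, xn in enumerate(x):
        y[n] = xn - prev_x + R * prev_y
        prev_x, prev_y = xn, y[n]
    return y

fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 100 * t)

# An asymmetric nonlinearity (here a biased tanh, chosen only for
# illustration) adds even harmonics and, with them, a DC offset.
shaped = np.tanh(tone + 0.3) - np.tanh(0.3)

raw_dc = float(np.mean(shaped))                 # noticeable DC offset
blocked_dc = float(np.mean(dc_block(shaped)))   # offset largely removed
```

Running the shaped tone through the blocker shrinks the residual DC by a couple of orders of magnitude, while the 100 Hz content passes through nearly unchanged.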