Open-source Android app: Real-time adaptive audio EQ for all apps (YouTube, Spotify, etc.) using a 25 KB neural net + native DSP


Hi r/androiddev,

I’m sharing an open-source project I’ve been working on that combines on-device ML, native audio DSP, and Android’s global audio effect API to dynamically optimize sound in real time — for any audio source (Spotify, YouTube, games, calls, etc.).

🎯 What it does

  • Applies adaptive equalization based on a lightweight neural model (Tiny AutoFUS, 25 KB)
  • Works globally — no need to modify individual apps
  • Runs 100% offline on device (no cloud, no internet)
  • Built with Kotlin + JNI + C++ (biquad filters, FFT, noise-aware gain; see the filter sketch below)
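
To give a sense of what one EQ band actually does, here's a minimal peaking-biquad sketch (RBJ Audio EQ Cookbook coefficients) written in Kotlin for readability. It only illustrates the filter math; the real filters run in C++ under jni/core/ and may differ in structure:

```kotlin
import kotlin.math.PI
import kotlin.math.cos
import kotlin.math.pow
import kotlin.math.sin

// One peaking-EQ band (RBJ Audio EQ Cookbook), direct form I.
// Kotlin port for illustration only; the project's DSP lives in C++ (BiquadEQ).
class PeakingBiquad(sampleRate: Float, centerHz: Float, q: Float, gainDb: Float) {
    private val b0: Float
    private val b1: Float
    private val b2: Float
    private val a1: Float
    private val a2: Float
    private var x1 = 0f; private var x2 = 0f
    private var y1 = 0f; private var y2 = 0f

    init {
        val a = 10.0.pow(gainDb / 40.0)           // linear amplitude from dB gain
        val w0 = 2.0 * PI * centerHz / sampleRate // normalized center frequency
        val alpha = sin(w0) / (2.0 * q)
        val a0 = 1.0 + alpha / a                  // normalization factor
        b0 = ((1.0 + alpha * a) / a0).toFloat()
        b1 = (-2.0 * cos(w0) / a0).toFloat()
        b2 = ((1.0 - alpha * a) / a0).toFloat()
        a1 = (-2.0 * cos(w0) / a0).toFloat()
        a2 = ((1.0 - alpha / a) / a0).toFloat()
    }

    /** Filters one block of mono float samples in place. */
    fun process(buffer: FloatArray) {
        for (i in buffer.indices) {
            val x0 = buffer[i]
            val y0 = b0 * x0 + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
            x2 = x1; x1 = x0
            y2 = y1; y1 = y0
            buffer[i] = y0
        }
    }
}
```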

🛠️ Tech stack

  • ML: PyTorch Mobile (Lite) — model loaded from assets
  • Native: CMake + NDK — core DSP in jni/core/ (BiquadEQ, FFTProcessor, NoiseGate)
  • Android APIs: AudioEffect, AudioCaptureService, MODIFY_AUDIO_SETTINGS (global-effect sketch after this list)
  • Architecture: Hybrid — Kotlin for control, C++ for low-latency processing
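
For the "works globally" part, here's a stripped-down Kotlin sketch of the general technique of attaching a framework effect to the output mix (audio session 0). It illustrates the approach rather than the exact code in the repo, and it's precisely the part where OEM behavior varies:

```kotlin
import android.media.audiofx.Equalizer

// Simplified sketch: an effect attached to audio session 0 (the output mix)
// touches every app's playback. Requires MODIFY_AUDIO_SETTINGS; session-0
// effects are deprecated and behave differently across OEM audio stacks,
// which is the stability question raised further down.
fun attachGlobalEqualizer(): Equalizer {
    val eq = Equalizer(/* priority = */ 0, /* audioSession = */ 0)
    eq.setEnabled(true)
    return eq
}

// Push per-band gains (in millibels) into the effect, e.g. values derived
// from the model's output.
fun applyBandGains(eq: Equalizer, gainsMb: ShortArray) {
    val bands = eq.numberOfBands.toInt().coerceAtMost(gainsMb.size)
    for (band in 0 until bands) {
        eq.setBandLevel(band.toShort(), gainsMb[band])
    }
}
```

The processing in the repo is custom C++ rather than the stock Equalizer bands; the sketch is only about how the global hook and the MODIFY_AUDIO_SETTINGS permission fit together.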

🔗 GitHub

https://github.com/Kretski/audio-optimizer-android

Includes:

  • Full source (Kotlin + C++)
  • Prebuilt APK (v1.0)
  • MIT license

❓ Why share this?

I’d love feedback on:

  • Best practices for global audio effects (stability across OEMs?)
  • Efficient Tensor ↔ JNI data transfer for real-time inference (see the buffer-sharing sketch below)
  • Ideas for latency reduction (currently ~20–40 ms on mid-range devices)
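
To make the Tensor ↔ JNI question concrete, this is the kind of pattern I mean: one preallocated direct FloatBuffer that C++ fills through GetDirectBufferAddress and that Tensor.fromBlob wraps without an extra copy. Function and library names here are placeholders, not the repo's actual JNI surface:

```kotlin
import org.pytorch.IValue
import org.pytorch.Module
import org.pytorch.Tensor
import java.nio.ByteBuffer
import java.nio.ByteOrder
import java.nio.FloatBuffer

// Placeholder names throughout; the real JNI bridge in the repo differs.
class EqInference(private val module: Module, featureSize: Int) {

    // Direct, native-order buffer so the C++ side can write features into it
    // via GetDirectBufferAddress with no per-frame allocation or copy.
    private val features: FloatBuffer = ByteBuffer
        .allocateDirect(featureSize * Float.SIZE_BYTES)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer()

    private val shape = longArrayOf(1, featureSize.toLong())

    // Hypothetical native call: computes FFT / noise features into `buffer`.
    private external fun fillFeaturesNative(buffer: FloatBuffer): Boolean

    fun predictBandGains(): FloatArray {
        features.rewind()
        if (!fillFeaturesNative(features)) return FloatArray(0)

        // fromBlob on a direct buffer avoids copying into a Java float[] first.
        val input = Tensor.fromBlob(features, shape)
        val output = module.forward(IValue.from(input)).toTensor()
        return output.dataAsFloatArray
    }

    companion object {
        init {
            System.loadLibrary("audiocore") // assumed lib name; match the CMake target
        }
    }
}
```

Whether this buffer-sharing path actually beats a plain FloatArray copy at such small feature sizes is exactly what I'd like help measuring properly.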

This is part of a larger effort around edge AI for scientific & industrial applications (think drone acoustics, engine diagnostics), but the audio module is general-purpose.

Thanks for taking a look!

