Thread: Is diablo gonna support A2DP??
Johnx — 2008-07-26, 16:51
Posts: 643 | Thanked: 628 times | Joined on Mar 2007 @ Seattle (or thereabouts)
#80
@IcoNyx: Ok, you understand the basics, but you're off by a little. In the future, you'd save a lot of time (and make a lot more friends...) if you read up or asked first before ranting about things you don't really understand. First of all, no one wants to implement A2DP at the kernel level. That would just be ugly. Secondly, the DSP is for more than just sound. It's a genuine Digital Signal Processor and can be programmed to do just about anything. The reason it's so often associated with sound on the tablets is that (in a software sense) it's the only part of the system that can directly drive the built-in speakers and headphones.
Now, because of this, Nokia uses a very non-standard sound setup when compared to any other Linux PDA or desktop. On desktop Linux, the stack for playing sound looks like this:
Music Player -> gstreamer (and/or other intermediate libraries) -> ALSA
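To make that stack concrete, here's a sketch of a hand-built desktop pipeline. This is illustrative only: the filename is a placeholder, and the exact element names assume the gstreamer 0.10 series that was current at the time.

```shell
# Decode a file and hand the raw audio straight to ALSA.
# filesrc/decodebin/audioconvert/alsasink are stock gstreamer 0.10
# elements; "device=default" names the default ALSA output.
gst-launch-0.10 filesrc location=song.ogg ! decodebin ! audioconvert \
    ! audioresample ! alsasink device=default
```

The point is that the last element in the chain is an ALSA sink, and ALSA sinks are interchangeable, which is exactly the flexibility the tablets lack.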
Most music players on desktop Linux have some way of choosing which ALSA device they want to output to. ALSA output devices can exist as kernel drivers or they can exist in user-space (such as A2DP). Even if the music player doesn't support choosing outputs, you can still just change which ALSA output is the default and everything will work fine.
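As a concrete (hedged) example of a user-space ALSA device: with the old bluez-utils ALSA plugin, a BT headset could be exposed as an ALSA output with a few lines in ~/.asoundrc. The plugin name and MAC address below are placeholders and depend on your bluez version:

```
# ~/.asoundrc -- user-space ALSA output backed by the bluez A2DP plugin
pcm.bluetooth {
    type bluetooth              # provided by the bluez-utils ALSA plugin
    device "00:11:22:33:44:55"  # your headset's BT address (placeholder)
}
```

Any ALSA-aware player can then be pointed at the "bluetooth" device, or you can alias it as pcm.!default to redirect all sound through it. Nothing like this works on the tablets, for the reasons below.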
On the tablets things are pretty different. Nokia has decided to make the DSP the bottom layer for playing sound on the tablet. Whether you're outputting to the speakers or wired headphones or to a BT headset/headphones via HSP, all sound is mixed on the DSP. So their stack for playing sound with "Media Player" through BT headphones over HSP looks like this:
Media Player (CPU) -> gstreamer (CPU) -> DSP (sound is mixed with other system sounds) -> BT libraries (CPU) -> BT headphones
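The contrast shows up at the pipeline level: on the tablets, gstreamer doesn't talk to ALSA at all but to Nokia's DSP sink elements. Something like the following is a sketch of tablet playback (dspmp3sink is one of Nokia's closed gstreamer plugins; treat the exact invocation as an assumption):

```shell
# On the tablet the sink IS a DSP task, not an ALSA device.
# dspmp3sink both decodes and outputs on the DSP, so there is no
# point in the pipeline where you could splice in an A2DP sink.
gst-launch filesrc location=song.mp3 ! dspmp3sink
```

Since decoding, mixing, and output all happen behind that one closed sink, you can't redirect the audio in user space the way you can on the desktop.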
Now, please note that Media Player, the gstreamer libraries for passing sound to the DSP (AFAIK), the DSP task for mixing sound, and the DSP task for playing sound are all closed source. If you want to change where Media Player outputs sound, you would need to modify its gstreamer pipeline, and for that you would likely need its source (unless there is some hidden config option for it that no one knows about). If you want all system sounds played through A2DP, you would, I think, need to modify the sound-mixing DSP task. That's what Nokia wants to do, but apparently it's pretty low priority for them or something. See here for their side of the story. Now, of course, you could reverse engineer any or all of that stack and reimplement it from scratch, but that would be a huge, wasteful duplication of effort, and I don't think anyone's really interested in doing that.
Please tell me if you need me to clear any of this up. It is pretty complicated, and as long as I'm writing this I'd like this to be the one central point that people can be directed to when they ask "What's the A2DP situation?"
@Everyone else: If I made a mistake, or was unclear or ambiguous, please, please, *please* tell me! I don't like to spread misinformation.
-John