I know this post is a couple of months old, but I know this requirement has been around for a very long time. I have looked at this on and off many times over the last few years. I am not much of a DirectShow coder; most of the time I look at this stuff and go "man, that's way too hard". However, I would like to share some of my experiences with this stuff, and maybe there will be someone out there who can fill in the blanks or provide some direction. Through a much-hacked-around example I currently get the audio of DirectShow videos routed through BASS, but it would be good to take a step back, put a bunch of concepts on the table and get them validated. I personally only want audio routed through BASS at the moment; I don't need a plugin or anything to display the video. Adding a video renderer to a filter graph is fairly standard DirectShow stuff and is covered by all the tutorials (although I believe my ideas would be a good start to creating a plugin, if that's what is ultimately required).

So, I believe that the solution is to create a DirectShow Audio Renderer. In DirectShow the Audio Renderer ultimately has 2 functions:

1. It is delivered the decoded audio samples; its job is to play them and report the time back to the graph for syncing.
2. It acts as the reference clock to sync the video to.

The reference clock is actually the part that I am having the most problems with. It's very important, and it's certainly key if you want to change the playback speed (as you would with tempo or samplerate FX, and potentially pitch too, I guess).

I use Delphi and some of the DSPack translations of DirectShow. DSPack includes DSoundRenderer.pas, which is an example DirectSound renderer. There is an article ( ) which suggests that it might not inherit from the correct base class, though. Maybe that would be a good starting point for someone much smarter than me to make some progress on this stuff.

To get data that is coming from another source out through BASS, I have used BASS_StreamCreate with STREAMPROC_PUSH to create a push stream. Then, when data arrives, I use BASS_StreamPutData to send the data. If you need to output data through BASS that is coming from another source, is this the right way to do it?

The next thing that really needs to happen is to know the position in the stream and send that back to DirectShow via a "reference clock". Is this right, though? From what I can tell it's OK for the clock to start over (for example after a seek); it just needs to report its position as an offset from the last reference time. I have had a lot of issues with this and am ultimately relying on a system clock and not BASS for it, but I am 99% certain that's not the right way to do it.

So, does anyone out there have any idea if this is the right way to go or not? If so, then which parts of BASS are required?