Repost: OMX Codec Explained in Detail

OMXCodec is the module responsible for decoding in StagefrightPlayer.
Because it follows the OpenMAX interface specification its structure is somewhat involved, so this article walks through it in the order AwesomePlayer calls into it.
The walkthrough covers four steps:
1 mClient->connect
2 InitAudioDecoder & InitVideoDecoder
3 The message-passing model
4 The decoding process
First, the class diagram:
(Class diagram: http://images0.cnblogs.com/blog2015/630771/201508/190932249108886.png)
OMXCodec is provided as a service: AwesomePlayer, acting as the client, talks to OMX over binder through mOmx (an IOMX interface) to get the decoding done.
The analysis below follows the actual code.
1 mClient->connect
This is called in the AwesomePlayer constructor:
AwesomePlayer::AwesomePlayer()
{
    ......
    CHECK_EQ(mClient.connect(), (status_t)OK);
    ......
}
Here is the implementation:
status_t OMXClient::connect() {
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> binder = sm->getService(String16("media.player"));
    sp<IMediaPlayerService> service = interface_cast<IMediaPlayerService>(binder);

    CHECK(service.get() != NULL);

    mOMX = service->getOMX();
    CHECK(mOMX.get() != NULL);

    if (!mOMX->livesLocally(NULL /* node */, getpid())) {
        ALOGI("Using client-side OMX mux.");
        mOMX = new MuxOMX(mOMX);
    }

    return OK;
}
This simply talks to MediaPlayerService over binder; the service-side implementation:
sp<IOMX> MediaPlayerService::getOMX() {
    Mutex::Autolock autoLock(mLock);

    if (mOMX.get() == NULL) {
        mOMX = new OMX;
    }

    return mOMX;
}
So an OMX object is constructed and returned to the caller, where it is stored in mClient's IOMX member mOMX.
What does the OMX constructor do?
OMX::OMX()
    : mMaster(new OMXMaster),
      mNodeCounter(0) {
}
It in turn runs the OMXMaster constructor:
OMXMaster::OMXMaster()
    : mVendorLibHandle(NULL) {
    addVendorPlugin();
    addPlugin(new SoftOMXPlugin);
}
OMXMaster can be seen as the entry point to the codecs: makeComponentInstance creates a codec instance, and decoding can start from there. We use the software codec plugin as the example, i.e. addPlugin(new SoftOMXPlugin).
The SoftOMXPlugin constructor first:
SoftOMXPlugin::SoftOMXPlugin() {
}
It is empty.
Now the addPlugin code:
void OMXMaster::addPlugin(OMXPluginBase *plugin) {
    Mutex::Autolock autoLock(mLock);

    mPlugins.push_back(plugin);

    OMX_U32 index = 0;

    char name[128];
    OMX_ERRORTYPE err;
    while ((err = plugin->enumerateComponents(
                    name, sizeof(name), index++)) == OMX_ErrorNone) {
        String8 name8(name);

        if (mPluginByComponentName.indexOfKey(name8) >= 0) {
            ALOGE("A component of name '%s' already exists, ignoring this one.",
                 name8.string());

            continue;
        }

        mPluginByComponentName.add(name8, plugin);
    }

    if (err != OMX_ErrorNoMore) {
        ALOGE("OMX plugin failed w/ error 0x%08x after registering %d "
             "components", err, mPluginByComponentName.size());
    }
}
The plugin argument here is the SoftOMXPlugin instance constructed above. The loop stores every component that enumerateComponents reports into the member mPluginByComponentName, keyed by component name. Let's look at enumerateComponents:
static const struct {
    const char *mName;
    const char *mLibNameSuffix;
    const char *mRole;
} kComponents[] = {
    { "OMX.google.aac.decoder", "aacdec", "audio_decoder.aac" },
    { "OMX.google.aac.encoder", "aacenc", "audio_encoder.aac" },
    { "OMX.google.amrnb.decoder", "amrdec", "audio_decoder.amrnb" },
    { "OMX.google.amrnb.encoder", "amrnbenc", "audio_encoder.amrnb" },
    { "OMX.google.amrwb.decoder", "amrdec", "audio_decoder.amrwb" },
    { "OMX.google.amrwb.encoder", "amrwbenc", "audio_encoder.amrwb" },
    { "OMX.google.h264.decoder", "h264dec", "video_decoder.avc" },
    { "OMX.google.h264.encoder", "h264enc", "video_encoder.avc" },
    { "OMX.google.g711.alaw.decoder", "g711dec", "audio_decoder.g711alaw" },
    { "OMX.google.g711.mlaw.decoder", "g711dec", "audio_decoder.g711mlaw" },
    { "OMX.google.h263.decoder", "mpeg4dec", "video_decoder.h263" },
    { "OMX.google.h263.encoder", "mpeg4enc", "video_encoder.h263" },
    { "OMX.google.mpeg4.decoder", "mpeg4dec", "video_decoder.mpeg4" },
    { "OMX.google.mpeg4.encoder", "mpeg4enc", "video_encoder.mpeg4" },
    { "OMX.google.mp3.decoder", "mp3dec", "audio_decoder.mp3" },
    { "OMX.google.vorbis.decoder", "vorbisdec", "audio_decoder.vorbis" },
    { "OMX.google.vpx.decoder", "vpxdec", "video_decoder.vpx" },
    { "OMX.google.raw.decoder", "rawdec", "audio_decoder.raw" },
    { "OMX.google.flac.encoder", "flacenc", "audio_encoder.flac" },
};

static const size_t kNumComponents =
    sizeof(kComponents) / sizeof(kComponents[0]);

OMX_ERRORTYPE SoftOMXPlugin::enumerateComponents(
        OMX_STRING name,
        size_t size,
        OMX_U32 index) {
    if (index >= kNumComponents) {
        return OMX_ErrorNoMore;
    }

    strcpy(name, kComponents[index].mName);

    return OMX_ErrorNone;
}
enumerateComponents simply copies out the name of the kComponents entry at the given index; only the component names are returned and end up stored in mPluginByComponentName. The actual decoder instance is created later via makeComponentInstance, which we look at in detail below.
That completes mClient->connect(). Its main job is to have MediaPlayerService construct an OMX instance via getOMX() and hand it back to the client, where it is kept in mClient's IOMX member mOMX. The OMX constructor in turn builds an OMXMaster, through which makeComponentInstance can later create the actual decoder instance.
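As a quick orientation, the client side then boils down to roughly the following (a hedged sketch assembled from the calls quoted above, not verbatim AwesomePlayer code):
// mClient is an OMXClient member of AwesomePlayer.
CHECK_EQ(mClient.connect(), (status_t)OK);   // obtains the IOMX proxy as shown above

sp<IOMX> omx = mClient.interface();          // the IOMX (possibly wrapped in MuxOMX)

// Later, in initAudioDecoder()/initVideoDecoder():
//     OMXCodec::Create(omx, format, false /* createEncoder */, track);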
2 InitAudioDecoder & InitVideoDecoder
After the AwesomePlayer constructor, setDataSource runs and then prepare, whose implementation calls initAudioDecoder and initVideoDecoder. Since setDataSource has already determined the track information, initAudioDecoder can now construct the actual decoder. Taking audio as the example:
status_t AwesomePlayer::initAudioDecoder() {
    ......
    mAudioSource = OMXCodec::Create(
            mClient.interface(), mAudioTrack->getFormat(),
            false, // createEncoder
            mAudioTrack);
    ......
    status_t err = mAudioSource->start();
    ......
    return mAudioSource != NULL ? OK : UNKNOWN_ERROR;
}
Only the main operations are shown. Next we look at what OMXCodec::Create and mAudioSource->start do. The code is long, so only the important parts are listed and unrelated code is omitted.
sp<MediaSource> OMXCodec::Create(...) {
    ......
    findMatchingCodecs(
            mime, createEncoder, matchComponentName, flags, &matchingCodecs);

    sp<OMXCodecObserver> observer = new OMXCodecObserver;
    IOMX::node_id node = 0;

    status_t err = omx->allocateNode(componentName, observer, &node);

    sp<OMXCodec> codec = new OMXCodec(
            omx, node, quirks, flags,
            createEncoder, mime, componentName,
            source, nativeWindow);

    observer->setCodec(codec);

    err = codec->configureCodec(meta);
    ......
}
Let's walk through each step in turn.
2.1 findMatchingCodecs
The code first:
void OMXCodec::findMatchingCodecs(
        const char *mime,
        bool createEncoder, const char *matchComponentName,
        uint32_t flags,
        Vector<CodecNameAndQuirks> *matchingCodecs) {
    matchingCodecs->clear();

    const MediaCodecList *list = MediaCodecList::getInstance();
    if (list == NULL) {
        return;
    }

    size_t index = 0;
    for (;;) {
        ssize_t matchIndex =
            list->findCodecByType(mime, createEncoder, index);

        if (matchIndex < 0) {
            break;
        }

        index = matchIndex + 1;

        const char *componentName = list->getCodecName(matchIndex);

        // If a specific codec is requested, skip the non-matching ones.
        if (matchComponentName && strcmp(componentName, matchComponentName)) {
            continue;
        }

        // When requesting software-only codecs, only push software codecs
        // When requesting hardware-only codecs, only push hardware codecs
        // When there is request neither for software-only nor for
        // hardware-only codecs, push all codecs
        if (((flags & kSoftwareCodecsOnly) &&  IsSoftwareCodec(componentName)) ||
            ((flags & kHardwareCodecsOnly) && !IsSoftwareCodec(componentName)) ||
            (!(flags & (kSoftwareCodecsOnly | kHardwareCodecsOnly)))) {

            ssize_t index = matchingCodecs->add();
            CodecNameAndQuirks *entry = &matchingCodecs->editItemAt(index);
            entry->mName = String8(componentName);
            entry->mQuirks = getComponentQuirks(list, matchIndex);

            ALOGV("matching '%s' quirks 0x%08x",
                  entry->mName.string(), entry->mQuirks);
        }
    }

    if (flags & kPreferSoftwareCodecs) {
        matchingCodecs->sort(CompareSoftwareCodecsFirst);
    }
}
As the code shows, this finds the codecs in MediaCodecList that match the given MIME type (and matchComponentName, if one was requested).
MediaCodecList itself is not covered here; essentially it parses /etc/media_codecs.xml into the list of supported codecs and matches against it.
Note that the kComponents array we saw earlier defines the supported software codecs, and /etc/media_codecs.xml lists the exposed components as well; the names must match (a sample entry is shown at the end of this subsection).
For each qualifying codec, matchingCodecs->add() appends an entry and its members are filled in, mainly the name and the quirks, so in the end all matching components sit in the matchingCodecs list.
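For illustration, a decoder entry in /etc/media_codecs.xml typically looks something like this (a hedged sample; the exact attributes vary by platform and Android version):
<MediaCodecs>
    <Decoders>
        <!-- the name must match the component name in kComponents -->
        <MediaCodec name="OMX.google.mp3.decoder" type="audio/mpeg" />
    </Decoders>
</MediaCodecs>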
2.2 allocateNode
The important code is shown below; the observer passed in is used to deliver messages back to the client.
status_t OMX::allocateNode(
        const char *name, const sp<IOMXObserver> &observer, node_id *node) {
    Mutex::Autolock autoLock(mLock);

    *node = 0;

    OMXNodeInstance *instance = new OMXNodeInstance(this, observer);

    OMX_COMPONENTTYPE *handle;
    OMX_ERRORTYPE err = mMaster->makeComponentInstance(
            name, &OMXNodeInstance::kCallbacks,
            instance, &handle);

    if (err != OMX_ErrorNone) {
        ALOGV("FAILED to allocate omx component '%s'", name);

        instance->onGetHandleFailed();

        return UNKNOWN_ERROR;
    }

    *node = makeNodeID(instance);
    mDispatchers.add(*node, new CallbackDispatcher(instance));

    instance->setHandle(*node, handle);

    mLiveNodes.add(observer->asBinder(), instance);
    observer->asBinder()->linkToDeath(this);

    return OK;
}
The node_id and the actual codec handle are stored in the OMXNodeInstance, and the instance ends up in OMX's mLiveNodes list. This is how OMXCodec later talks to the decoder through OMXNodeInstance; see the communication model below.
The message flow is described later; here the focus is on the interaction with the decoder itself. The call to mMaster->makeComponentInstance above creates the codec instance. We use Android's built-in MP3 decoder as the example.
As introduced above, the MP3 decoder has its own entry in /etc/media_codecs.xml, and findMatchingCodecs matches against it based on the MIME type passed in; the matching component is named OMX.google.mp3.decoder. With that name, a lookup in the kComponents table yields the actual decoder, whose source lives in frameworks/av/media/libstagefright/codecs/mp3dec/SoftMP3.cpp.
The method invoked is mMaster->makeComponentInstance, whose actual code is:
OMX_ERRORTYPE OMXMaster::makeComponentInstance(
        const char *name,
        const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData,
        OMX_COMPONENTTYPE **component) {
    Mutex::Autolock autoLock(mLock);

    *component = NULL;

    ssize_t index = mPluginByComponentName.indexOfKey(String8(name));

    if (index < 0) {
        return OMX_ErrorInvalidComponentName;
    }

    OMXPluginBase *plugin = mPluginByComponentName.valueAt(index);
    OMX_ERRORTYPE err =
        plugin->makeComponentInstance(name, callbacks, appData, component);

    if (err != OMX_ErrorNone) {
        return err;
    }

    mPluginByInstance.add(*component, plugin);

    return err;
}
This mainly dispatches to the plugin's makeComponentInstance. The plugin was loaded in the OMXMaster constructor via addPlugin(new SoftOMXPlugin), so the method invoked is SoftOMXPlugin's:
OMX_ERRORTYPE SoftOMXPlugin::makeComponentInstance(
        const char *name,
        const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData,
        OMX_COMPONENTTYPE **component) {
    ALOGV("makeComponentInstance '%s'", name);

    for (size_t i = 0; i < kNumComponents; ++i) {
        if (strcmp(name, kComponents[i].mName)) {
            continue;
        }

        AString libName = "libstagefright_soft_";
        libName.append(kComponents[i].mLibNameSuffix);
        libName.append(".so");

        void *libHandle = dlopen(libName.c_str(), RTLD_NOW);

        if (libHandle == NULL) {
            ALOGE("unable to dlopen %s", libName.c_str());

            return OMX_ErrorComponentNotFound;
        }

        typedef SoftOMXComponent *(*CreateSoftOMXComponentFunc)(
                const char *, const OMX_CALLBACKTYPE *,
                OMX_PTR, OMX_COMPONENTTYPE **);

        CreateSoftOMXComponentFunc createSoftOMXComponent =
            (CreateSoftOMXComponentFunc)dlsym(
                    libHandle,
                    "_Z22createSoftOMXComponentPKcPK16OMX_CALLBACKTYPE"
                    "PvPP17OMX_COMPONENTTYPE");

        if (createSoftOMXComponent == NULL) {
            dlclose(libHandle);
            libHandle = NULL;

            return OMX_ErrorComponentNotFound;
        }

        sp<SoftOMXComponent> codec =
            (*createSoftOMXComponent)(name, callbacks, appData, component);

        if (codec == NULL) {
            dlclose(libHandle);
            libHandle = NULL;

            return OMX_ErrorInsufficientResources;
        }

        OMX_ERRORTYPE err = codec->initCheck();
        if (err != OMX_ErrorNone) {
            dlclose(libHandle);
            libHandle = NULL;

            return err;
        }

        codec->incStrong(this);
        codec->setLibHandle(libHandle);

        return OMX_ErrorNone;
    }

    return OMX_ErrorInvalidComponentName;
}
It walks the kComponents table to find the matching record, in our case
{ "OMX.google.mp3.decoder", "mp3dec", "audio_decoder.mp3" },
Each codec is provided as a shared library whose name follows the rule libstagefright_soft_ plus the library suffix, here libstagefright_soft_mp3dec.so. The library is loaded with dlopen, then createSoftOMXComponent is located with dlsym and invoked; every soft codec must implement this function. The MP3 implementation:
android::SoftOMXComponent *createSoftOMXComponent(
        const char *name, const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData, OMX_COMPONENTTYPE **component) {
    return new android::SoftMP3(name, callbacks, appData, component);
}
It constructs a SoftMP3 object and returns it. Note that the codec handle is not passed back through the return value; instead the constructor chain wires it into the caller-supplied OMX_COMPONENTTYPE **component parameter. The constructor:
SoftMP3::SoftMP3(
        const char *name,
        const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData,
        OMX_COMPONENTTYPE **component)
    : SimpleSoftOMXComponent(name, callbacks, appData, component),
      mConfig(new tPVMP3DecoderExternal),
      mDecoderBuf(NULL),
      mAnchorTimeUs(0),
      mNumFramesOutput(0),
      mNumChannels(2),
      mSamplingRate(44100),
      mSignalledError(false),
      mOutputPortSettingsChange(NONE) {
    initPorts();
    initDecoder();
}
This is basic initialization. SoftMP3 inherits from SimpleSoftOMXComponent, so that constructor runs as well:
SimpleSoftOMXComponent::SimpleSoftOMXComponent(
        const char *name,
        const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData,
        OMX_COMPONENTTYPE **component)
    : SoftOMXComponent(name, callbacks, appData, component),
      mLooper(new ALooper),
      mHandler(new AHandlerReflector<SimpleSoftOMXComponent>(this)),
      mState(OMX_StateLoaded),
      mTargetState(OMX_StateLoaded) {
    mLooper->setName(name);
    mLooper->registerHandler(mHandler);

    mLooper->start(
            false, // runOnCallingThread
            false, // canCallJava
            ANDROID_PRIORITY_FOREGROUND);
}
The main thing here is that an ALooper is created and started; how ALooper works will be covered in a separate article.
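As a minimal sketch of the pattern (assuming the ALooper/AHandler/AMessage classes from libstagefright_foundation; simplified from the constructor above, not verbatim):
// The component registers an AHandlerReflector on its own looper thread, so any
// AMessage posted with mHandler->id() is delivered back asynchronously to
// SimpleSoftOMXComponent::onMessageReceived() on that thread.
sp<ALooper> looper = new ALooper;
sp<AHandlerReflector<SimpleSoftOMXComponent> > handler =
        new AHandlerReflector<SimpleSoftOMXComponent>(component);

looper->setName("OMX.google.mp3.decoder");
looper->registerHandler(handler);
looper->start(false /* runOnCallingThread */);

// emptyThisBuffer()/fillThisBuffer() later post messages like this:
sp<AMessage> msg = new AMessage(kWhatEmptyThisBuffer, handler->id());
msg->setPointer("header", header);
msg->post();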
SoftOMXComponent::SoftOMXComponent(
        const char *name,
        const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData,
        OMX_COMPONENTTYPE **component)
    : mName(name),
      mCallbacks(callbacks),
      mComponent(new OMX_COMPONENTTYPE),
      mLibHandle(NULL) {
    mComponent->nSize = sizeof(*mComponent);
    mComponent->pComponentPrivate = this;
    mComponent->SetParameter = SetParameterWrapper;
    ......
    mComponent->UseEGLImage = NULL;
    mComponent->ComponentRoleEnum = NULL;

    *component = mComponent;
}
This is where the component handle is actually constructed and its function pointers are filled in, for example:
OMX_ERRORTYPE SoftOMXComponent::SetParameterWrapper(
        OMX_HANDLETYPE component,
        OMX_INDEXTYPE index,
        OMX_PTR params) {
    SoftOMXComponent *me =
        (SoftOMXComponent *)
            ((OMX_COMPONENTTYPE *)component)->pComponentPrivate;

    return me->setParameter(index, params);
}
So mComponent's SetParameter function pointer is initialized to SoftOMXComponent::SetParameterWrapper, and from the constructor we know that
mComponent->pComponentPrivate = this;
Therefore the wrapper ends up calling this->setParameter(), i.e. the subclass's implementation. (This is important; make sure you understand it thoroughly.)
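To make the dispatch trick concrete, here is a small standalone sketch of the same pattern in plain C++ (not framework code): the C-style handle carries a function pointer plus an opaque private pointer back to the C++ object, so the static wrapper can recover `this` and invoke a virtual method on the subclass.
#include <cstdio>

struct Component {                       // stands in for OMX_COMPONENTTYPE
    void *pComponentPrivate;
    int (*SetParameter)(Component *self, int index);
};

class Base {                             // stands in for SoftOMXComponent
public:
    explicit Base(Component *c) {
        c->pComponentPrivate = this;     // remember the C++ object in the handle
        c->SetParameter = SetParameterWrapper;
    }
    virtual ~Base() {}

protected:
    virtual int setParameter(int /*index*/) { return -1; }  // overridden by subclasses

private:
    static int SetParameterWrapper(Component *self, int index) {
        // Recover `this` from the handle, then dispatch virtually.
        return static_cast<Base *>(self->pComponentPrivate)->setParameter(index);
    }
};

class Mp3Like : public Base {            // stands in for SoftMP3 / SimpleSoftOMXComponent
public:
    explicit Mp3Like(Component *c) : Base(c) {}

protected:
    int setParameter(int index) override {
        printf("Mp3Like::setParameter(%d)\n", index);
        return 0;
    }
};

int main() {
    Component comp;
    Mp3Like codec(&comp);
    comp.SetParameter(&comp, 42);        // lands in Mp3Like::setParameter via the wrapper
    return 0;
}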
From the analysis above, Android already implements the codec message plumbing in the two base classes SoftOMXComponent and SimpleSoftOMXComponent; a concrete codec only needs to inherit from SimpleSoftOMXComponent and implement the corresponding message handling.
The SoftMP3 constructor additionally calls initPorts() and initDecoder(); note that at this level SoftMP3 can already drive the real decoder (initialize it and so on), so SoftMP3 can be regarded as a wrapper around the actual decoder. The exact call sequence is covered in the message-handling section below.
That concludes allocateNode: its main job is to establish the link to the codec (observer plus node_id) and to find and initialize the actual decoder.
2.3 The OMXCodec constructor
The next statement executed is sp<OMXCodec> codec = new OMXCodec(...); the implementation:
OMXCodec::OMXCodec(
        const sp<IOMX> &omx, IOMX::node_id node,
        uint32_t quirks, uint32_t flags,
        bool isEncoder,
        const char *mime,
        const char *componentName,
        const sp<MediaSource> &source,
        const sp<ANativeWindow> &nativeWindow)
    : mOMX(omx),
      mOMXLivesLocally(omx->livesLocally(node, getpid())),
      mNode(node),
      mQuirks(quirks),
      mFlags(flags),
      mIsEncoder(isEncoder),
      mIsVideo(!strncasecmp("video/", mime, 6)),
      mMIME(strdup(mime)),
      mComponentName(strdup(componentName)),
      mSource(source),
      mCodecSpecificDataIndex(0),
      mState(LOADED),
      mInitialBufferSubmit(true),
      mSignalledEOS(false),
      mNoMoreOutputData(false),
      mOutputPortSettingsHaveChanged(false),
      mSeekTimeUs(-1),
      mSeekMode(ReadOptions::SEEK_CLOSEST_SYNC),
      mTargetTimeUs(-1),
      mOutputPortSettingsChangedPending(false),
      mSkipCutBuffer(NULL),
      mLeftOverBuffer(NULL),
      mPaused(false),
      mNativeWindow(
              (!strncmp(componentName, "OMX.google.", 11)
              || !strcmp(componentName, "OMX.Nvidia.mpeg2v.decode"))
                        ? NULL : nativeWindow) {
    mPortStatus[kPortIndexInput] = ENABLED;
    mPortStatus[kPortIndexOutput] = ENABLED;

    setComponentRole();
}
This mainly records all the work done so far inside the OMXCodec instance; from this point AwesomePlayer operates directly on the OMXCodec (mAudioSource). Also note that OMXCodec inherits from MediaSource and MediaBufferObserver, which is why it can serve as the data source for the output module.
2.4 Configuring the codec
setComponentRole and configureCodec are best revisited after the message mechanism has been introduced below; readers can analyze them on their own. Having covered Create, we now look at mAudioSource->start:
status_t OMXCodec::start(MetaData *meta) {
    ......
    mSource->start(params.get());
    ......
    return init();
}
Only the important code is shown. mSource->start starts the codec's data source, i.e. the MediaSource obtained from the extractor via getTrack; it is straightforward and not covered here. Now init():
status_t OMXCodec::init() {
    ......
    err = allocateBuffers();
    ......
    return mState == ERROR ? UNKNOWN_ERROR : OK;
}
Its main job is to allocate buffers via allocateBuffers:
status_t OMXCodec::allocateBuffers() {
    status_t err = allocateBuffersOnPort(kPortIndexInput);

    if (err != OK) {
        return err;
    }

    return allocateBuffersOnPort(kPortIndexOutput);
}
Input and output buffers are allocated separately. Let's look at allocateBuffersOnPort in pieces:
status_t OMXCodec::allocateBuffersOnPort(OMX_U32 portIndex) {
    ......
    OMX_PARAM_PORTDEFINITIONTYPE def;
    InitOMXParams(&def);
    def.nPortIndex = portIndex;

    err = mOMX->getParameter(
            mNode, OMX_IndexParamPortDefinition, &def, sizeof(def));

    size_t totalSize = def.nBufferCountActual * def.nBufferSize;
    mDealer[portIndex] = new MemoryDealer(totalSize, "OMXCodec");
It first queries the port definition with the OMX_IndexParamPortDefinition command, then constructs a MemoryDealer sized from the buffer count and buffer size returned. (How such a command travels to the codec is exactly what the message-mechanism section explains; come back to this part after reading it.) Taking MP3 as the example, the SoftMP3 constructor calls initPorts to initialize the OMX_PARAM_PORTDEFINITIONTYPE objects, which fix how many buffers there are and how large each one is; that is what the OMX_IndexParamPortDefinition query reads back here.
    for (OMX_U32 i = 0; i < def.nBufferCountActual; ++i) {
        sp<IMemory> mem = mDealer[portIndex]->allocate(def.nBufferSize);
        CHECK(mem.get() != NULL);

        BufferInfo info;
        info.mData = NULL;
        info.mSize = def.nBufferSize;

        err = mOMX->useBuffer(mNode, portIndex, mem, &buffer);

        if (mem != NULL) {
            info.mData = mem->pointer();
        }

        info.mBuffer = buffer;
        info.mStatus = OWNED_BY_US;
        info.mMem = mem;
        info.mMediaBuffer = NULL;

        mPortBuffers[portIndex].push(info);

        CODEC_LOGV("allocated buffer %p on %s port", buffer,
             portIndex == kPortIndexInput ? "input" : "output");
    }
Then comes a loop (secure-buffer and other unrelated code omitted): it allocates the memory, creates a BufferInfo for each chunk, and pushes them all into the corresponding mPortBuffers[portIndex] vector.
With that, initAudioDecoder is finished; it did two things: created the actual decoder and allocated the buffers.
3 The message-passing model
Once the link to the codec is established, the remaining work is mostly passing messages down to the codec and returning its results to the caller. Since the model has not been spelled out clearly yet, this section is devoted to it, using setComponentRole (called at the end of the OMXCodec constructor) as the example. The code:
void OMXCodec::setComponentRole() {
    setComponentRole(mOMX, mNode, mIsEncoder, mMIME);
}
// static
void OMXCodec::setComponentRole(
        const sp<IOMX> &omx, IOMX::node_id node, bool isEncoder,
        const char *mime) {
    struct MimeToRole {
        const char *mime;
        const char *decoderRole;
        const char *encoderRole;
    };

    static const MimeToRole kMimeToRole[] = {
        { MEDIA_MIMETYPE_AUDIO_MPEG,
            "audio_decoder.mp3", "audio_encoder.mp3" },
        { MEDIA_MIMETYPE_AUDIO_MPEG_LAYER_I,
            "audio_decoder.mp1", "audio_encoder.mp1" },
        { MEDIA_MIMETYPE_AUDIO_MPEG_LAYER_II,
            "audio_decoder.mp2", "audio_encoder.mp2" },
        { MEDIA_MIMETYPE_AUDIO_AMR_NB,
            "audio_decoder.amrnb", "audio_encoder.amrnb" },
        { MEDIA_MIMETYPE_AUDIO_AMR_WB,
            "audio_decoder.amrwb", "audio_encoder.amrwb" },
        { MEDIA_MIMETYPE_AUDIO_AAC,
            "audio_decoder.aac", "audio_encoder.aac" },
        { MEDIA_MIMETYPE_AUDIO_VORBIS,
            "audio_decoder.vorbis", "audio_encoder.vorbis" },
        { MEDIA_MIMETYPE_AUDIO_G711_MLAW,
            "audio_decoder.g711mlaw", "audio_encoder.g711mlaw" },
        { MEDIA_MIMETYPE_AUDIO_G711_ALAW,
            "audio_decoder.g711alaw", "audio_encoder.g711alaw" },
        { MEDIA_MIMETYPE_VIDEO_AVC,
            "video_decoder.avc", "video_encoder.avc" },
        { MEDIA_MIMETYPE_VIDEO_MPEG4,
            "video_decoder.mpeg4", "video_encoder.mpeg4" },
        { MEDIA_MIMETYPE_VIDEO_H263,
            "video_decoder.h263", "video_encoder.h263" },
        { MEDIA_MIMETYPE_VIDEO_VPX,
            "video_decoder.vpx", "video_encoder.vpx" },
        { MEDIA_MIMETYPE_AUDIO_RAW,
            "audio_decoder.raw", "audio_encoder.raw" },
        { MEDIA_MIMETYPE_AUDIO_FLAC,
            "audio_decoder.flac", "audio_encoder.flac" },
    };

    static const size_t kNumMimeToRole =
        sizeof(kMimeToRole) / sizeof(kMimeToRole[0]);

    size_t i;
    for (i = 0; i < kNumMimeToRole; ++i) {
        if (!strcasecmp(mime, kMimeToRole[i].mime)) {
            break;
        }
    }

    if (i == kNumMimeToRole) {
        return;
    }

    const char *role =
        isEncoder ? kMimeToRole[i].encoderRole
                  : kMimeToRole[i].decoderRole;

    if (role != NULL) {
        OMX_PARAM_COMPONENTROLETYPE roleParams;
        InitOMXParams(&roleParams);

        strncpy((char *)roleParams.cRole,
                role, OMX_MAX_STRINGNAME_SIZE - 1);

        roleParams.cRole[OMX_MAX_STRINGNAME_SIZE - 1] = '\0';

        status_t err = omx->setParameter(
                node, OMX_IndexParamStandardComponentRole,
                &roleParams, sizeof(roleParams));

        if (err != OK) {
            ALOGW("Failed to set standard component role '%s'.", role);
        }
    }
}
We are decoding, so role == "audio_decoder.mp3". The key steps are InitOMXParams(&roleParams), which merely initializes the roleParams structure, followed by omx->setParameter(...), which does the real work of recording the OMX_IndexParamStandardComponentRole parameter in the component. The call lands in OMX's method on the service side (see the mClient->connect discussion in part 1 if that is unfamiliar):
status_t OMX::setParameter(
        node_id node, OMX_INDEXTYPE index,
        const void *params, size_t size) {
    return findInstance(node)->setParameter(
            index, params, size);
}
First the node_id is resolved to its OMXNodeInstance (the instance that wraps the observer), and the call continues into it:
status_t OMXNodeInstance::setParameter(
        OMX_INDEXTYPE index, const void *params, size_t size) {
    Mutex::Autolock autoLock(mLock);

    OMX_ERRORTYPE err = OMX_SetParameter(
            mHandle, index, const_cast<void *>(params));

    return StatusFromOMXError(err);
}
OMX_SetParameter is a macro:
#define OMX_SetParameter(                               \
        hComponent,                                     \
        nParamIndex,                                    \
        pComponentParameterStructure)                   \
    ((OMX_COMPONENTTYPE*)hComponent)->SetParameter(     \
        hComponent,                                     \
        nParamIndex,                                    \
        pComponentParameterStructure)
So the work goes through the OMX_COMPONENTTYPE handle, i.e. the object initialized by SoftMP3's base-class constructor. SoftOMXComponent itself has no real implementation of setParameter; SimpleSoftOMXComponent does:
OMX_ERRORTYPE SimpleSoftOMXComponent::setParameter(
        OMX_INDEXTYPE index, const OMX_PTR params) {
    Mutex::Autolock autoLock(mLock);

    CHECK(isSetParameterAllowed(index, params));

    return internalSetParameter(index, params);
}
OMX_ERRORTYPE SoftMP3::internalSetParameter(
        OMX_INDEXTYPE index, const OMX_PTR params) {
    switch (index) {
        case OMX_IndexParamStandardComponentRole:
        {
            const OMX_PARAM_COMPONENTROLETYPE *roleParams =
                (const OMX_PARAM_COMPONENTROLETYPE *)params;

            if (strncmp((const char *)roleParams->cRole,
                        "audio_decoder.mp3",
                        OMX_MAX_STRINGNAME_SIZE - 1)) {
                return OMX_ErrorUndefined;
            }

            return OMX_ErrorNone;
        }

        default:
            return SimpleSoftOMXComponent::internalSetParameter(index, params);
    }
}
Note that the internalSetParameter invoked here is SoftMP3's implementation (shown above), not SimpleSoftOMXComponent's, and the index passed in is OMX_IndexParamStandardComponentRole.
To summarize the chain traced in this section: OMXCodec -> IOMX (binder) -> OMX::setParameter -> OMXNodeInstance::setParameter -> the OMX_SetParameter macro -> SoftOMXComponent::SetParameterWrapper -> SimpleSoftOMXComponent::setParameter -> SoftMP3::internalSetParameter. By reaching the OMX_COMPONENTTYPE handle through OMXNodeInstance, OMXCodec gains the ability to actually talk to the decoder.
4 The decoding process
This section shows how OMXCodec is driven to decode data. Once the OMXCodec instance exists, AudioPlayer (created inside AwesomePlayer) calls mSource->read(&mInputBuffer, &options) in its fillBuffer to fetch PCM data; here mSource is mAudioSource.
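For context, the caller side looks roughly like this (a simplified sketch of AudioPlayer::fillBuffer from memory, not verbatim framework code):
// Inside AudioPlayer::fillBuffer (simplified): mSource is the OMXCodec created
// as mAudioSource in initAudioDecoder().
MediaSource::ReadOptions options;
if (mSeeking) {
    options.setSeekTo(mSeekTimeUs);
}

MediaBuffer *mInputBuffer = NULL;
status_t err = mSource->read(&mInputBuffer, &options);

// On success the MediaBuffer holds decoded PCM; it is copied into the audio
// sink's buffer, and mInputBuffer->release() later hands the output buffer
// back to OMXCodec.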
Now the read function itself; the code is in OMXCodec.cpp and we take it in pieces.
status_t OMXCodec::read(
        MediaBuffer **buffer, const ReadOptions *options) {
    status_t err = OK;
    *buffer = NULL;

    Mutex::Autolock autoLock(mLock);

    if (mState != EXECUTING && mState != RECONFIGURING) {
        return UNKNOWN_ERROR;
    }
After the earlier parameter setup, a few state-transition callbacks have moved the state to EXECUTING. Also note that mInitialBufferSubmit defaults to true:
    if (mInitialBufferSubmit) {
        mInitialBufferSubmit = false;

        drainInputBuffers();
        fillOutputBuffers();
    }
drainInputBuffers can be thought of as reading packets from the extractor into the input buffers; fillOutputBuffers asks the codec to decode and place the result in the output buffers. Skipping the seek-handling code:
    size_t index = *mFilledBuffers.begin();
    mFilledBuffers.erase(mFilledBuffers.begin());

    BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);
    CHECK_EQ((int)info->mStatus, (int)OWNED_BY_US);
    info->mStatus = OWNED_BY_CLIENT;

    info->mMediaBuffer->add_ref();
    if (mSkipCutBuffer != NULL) {
        mSkipCutBuffer->submit(info->mMediaBuffer);
    }
    *buffer = info->mMediaBuffer;

    return OK;
}
Here we pop a BufferInfo off the output list and hand its MediaBuffer to the caller through the buffer out-parameter. Whenever the decoder finishes a buffer, the index of that filled buffer is pushed onto mFilledBuffers, so each time AudioPlayer reads from OMXCodec it takes from mFilledBuffers: if the list is empty, read() waits for the decoder to decode and fill a buffer; if data is already there, it is taken immediately.
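The wait itself looks roughly like this inside read() (reconstructed from memory, so treat it as a sketch rather than the exact code):
    // Block until the codec has pushed at least one filled output buffer.
    while (mState != ERROR && !mNoMoreOutputData && mFilledBuffers.empty()) {
        if ((err = waitForBufferFilled_l()) != OK) {
            return err;
        }
    }
    // waitForBufferFilled_l() essentially does
    //     mBufferFilled.waitRelative(mLock, kBufferFilledEventTimeOutNs);
    // i.e. it blocks on the condition variable that on_message() signals when
    // a FILL_BUFFER_DONE arrives (see part B below).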
AudioPlayer then uses the returned MediaBuffer (for example in its start/fillBuffer path). Before returning, read() sets
info->mStatus = OWNED_BY_CLIENT;
meaning this BufferInfo now belongs to the client and will be handed back when the client releases it. The info->mMediaBuffer->add_ref() call bumps the reference count, which matters later when the buffer is released. One more aside: setting mStatus is how a single piece of memory is handed between different owners; "component" is obviously the decoder side and "client" is the outside caller such as AudioPlayer. The possible roles are:
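(These are the BufferInfo::mStatus values declared in OMXCodec.h, listed from memory, so treat them as a sketch.)
enum BufferStatus {
    OWNED_BY_US,              // OMXCodec itself may fill or resubmit the buffer
    OWNED_BY_COMPONENT,       // handed to the decoder via emptyBuffer()/fillBuffer()
    OWNED_BY_NATIVE_WINDOW,   // queued to the ANativeWindow for display (video only)
    OWNED_BY_CLIENT,          // handed out through read(), e.g. to AudioPlayer
};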
Next we focus on how data is read from the extractor and how it gets decoded.
4.1 drainInputBuffers
void OMXCodec::drainInputBuffers() {
    ......
    for (size_t i = 0; i < buffers->size(); ++i) {
        BufferInfo *info = &buffers->editItemAt(i);

        if (info->mStatus != OWNED_BY_US) {
            continue;
        }

        if (!drainInputBuffer(info)) {
            break;
        }

        if (mFlags & kOnlySubmitOneInputBufferAtOneTime) {
            break;
        }
    }
}
Since several input buffers may have been allocated, this is a loop: it first checks that we own the buffer (OWNED_BY_US), and after one buffer has been filled it checks the kOnlySubmitOneInputBufferAtOneTime flag to decide whether to stop after a single buffer. Now into drainInputBuffer(info), with unrelated code omitted:
bool OMXCodec::drainInputBuffer(BufferInfo *info) {
    ......
    status_t err;

    bool signalEOS = false;
    int64_t timestampUs = 0;

    size_t offset = 0;
    int32_t n = 0;

    for (;;) {
        MediaBuffer *srcBuffer;

        err = mSource->read(&srcBuffer);

        size_t remainingBytes = info->mSize - offset;

        // Does the data read from the extractor exceed what is left in this buffer?
        if (srcBuffer->range_length() > remainingBytes) {
            if (offset == 0) {
                srcBuffer->release();
                srcBuffer = NULL;

                setState(ERROR);
                return false;
            }

            mLeftOverBuffer = srcBuffer;
            break;
        }

        memcpy((uint8_t *)info->mData + offset,
               (const uint8_t *)srcBuffer->data()
                    + srcBuffer->range_offset(),
               srcBuffer->range_length());

        offset += srcBuffer->range_length();

        // Release srcBuffer once its contents have been copied.
        if (releaseBuffer) {
            srcBuffer->release();
            srcBuffer = NULL;
        }
    }

    err = mOMX->emptyBuffer(
            mNode, info->mBuffer, 0, offset,
            flags, timestampUs);

    info->mStatus = OWNED_BY_COMPONENT;
    ......
}
Once the data has been copied in, the buffer's status is set to OWNED_BY_COMPONENT so the codec may decode it. Note that reading the data involves a copy here rather than sharing the same buffer. Reading from the extractor was covered in the earlier extractor discussion and is simple, so it is skipped.
Now what mOMX->emptyBuffer does once the data is in place. From the earlier discussion it clearly goes OMX::emptyBuffer -> OMXNodeInstance::emptyBuffer, and from the code the final call is ((OMX_COMPONENTTYPE *)hComponent)->EmptyThisBuffer(), whose implementation is in SimpleSoftOMXComponent.cpp:
OMX_ERRORTYPE SimpleSoftOMXComponent::emptyThisBuffer(
        OMX_BUFFERHEADERTYPE *buffer) {
    sp<AMessage> msg = new AMessage(kWhatEmptyThisBuffer, mHandler->id());
    msg->setPointer("header", buffer);
    msg->post();

    return OMX_ErrorNone;
}
So it just posts a kWhatEmptyThisBuffer message; because it is posted to mHandler->id(), the component itself receives what it sent. The handler:
void SimpleSoftOMXComponent::onMessageReceived(const sp<AMessage> &msg) {
    Mutex::Autolock autoLock(mLock);
    uint32_t msgType = msg->what();
    ALOGV("msgType = %d", msgType);

    switch (msgType) {
        ......
        case kWhatEmptyThisBuffer:
        case kWhatFillThisBuffer:
        {
            OMX_BUFFERHEADERTYPE *header;
            CHECK(msg->findPointer("header", (void **)&header));

            CHECK(mState == OMX_StateExecuting && mTargetState == mState);

            bool found = false;
            size_t portIndex = (kWhatEmptyThisBuffer == msgType)
                    ? header->nInputPortIndex : header->nOutputPortIndex;
            PortInfo *port = &mPorts.editItemAt(portIndex);

            for (size_t j = 0; j < port->mBuffers.size(); ++j) {
                BufferInfo *buffer = &port->mBuffers.editItemAt(j);

                if (buffer->mHeader == header) {
                    CHECK(!buffer->mOwnedByUs);
                    buffer->mOwnedByUs = true;

                    CHECK((msgType == kWhatEmptyThisBuffer
                            && port->mDef.eDir == OMX_DirInput)
                            || (port->mDef.eDir == OMX_DirOutput));

                    port->mQueue.push_back(buffer);
                    onQueueFilled(portIndex);

                    found = true;
                    break;
                }
            }

            CHECK(found);
            break;
        }

        default:
            TRESPASS();
            break;
    }
}
Both cases run the same code and are ultimately handled by onQueueFilled, which is the real processing function. For MP3 it is implemented in SoftMP3; explanations are given as comments in the code below.
void SoftMP3::onQueueFilled(OMX_U32 portIndex) {
    if (mSignalledError || mOutputPortSettingsChange != NONE) {
        return;
    }

    // Get the input and output queues.
    List<BufferInfo *> &inQueue = getPortQueue(0);
    List<BufferInfo *> &outQueue = getPortQueue(1);

    while (!inQueue.empty() && !outQueue.empty()) {
        // Take the first buffer from each of the input and output queues.
        BufferInfo *inInfo = *inQueue.begin();
        OMX_BUFFERHEADERTYPE *inHeader = inInfo->mHeader;

        BufferInfo *outInfo = *outQueue.begin();
        OMX_BUFFERHEADERTYPE *outHeader = outInfo->mHeader;

        // If the input buffer carries the EOS flag there is no more data to decode.
        if (inHeader->nFlags & OMX_BUFFERFLAG_EOS) {
            inQueue.erase(inQueue.begin());
            inInfo->mOwnedByUs = false;
            notifyEmptyBufferDone(inHeader);

            if (!mIsFirst) {
                // pad the end of the stream with 529 samples, since that many samples
                // were trimmed off the beginning when decoding started
                outHeader->nFilledLen =
                    kPVMP3DecoderDelay * mNumChannels * sizeof(int16_t);

                memset(outHeader->pBuffer, 0, outHeader->nFilledLen);
            } else {
                // Since we never discarded frames from the start, we won't have
                // to add any padding at the end either.
                outHeader->nFilledLen = 0;
            }

            outHeader->nFlags = OMX_BUFFERFLAG_EOS;

            outQueue.erase(outQueue.begin());
            outInfo->mOwnedByUs = false;
            notifyFillBufferDone(outHeader);
            return;
        }

        // nOffset == 0 means this is the start of an input packet, so pick up
        // its timestamp (compare with the extractor discussion).
        if (inHeader->nOffset == 0) {
            mAnchorTimeUs = inHeader->nTimeStamp;
            mNumFramesOutput = 0;
        }

        mConfig->pInputBuffer =
            inHeader->pBuffer + inHeader->nOffset;

        mConfig->inputBufferCurrentLength = inHeader->nFilledLen;
        mConfig->inputBufferMaxLength = 0;
        mConfig->inputBufferUsedLength = 0;

        mConfig->outputFrameSize = kOutputBufferSize / sizeof(int16_t);

        mConfig->pOutputBuffer =
            reinterpret_cast<int16_t *>(outHeader->pBuffer);

        ERROR_CODE decoderErr;

        // The parameters are set up above; now call the actual decoder.
        if ((decoderErr = pvmp3_framedecoder(mConfig, mDecoderBuf))
                != NO_DECODING_ERROR) {
            // ... error handling ...

            // If decoding failed, output zeros, i.e. a silent frame.
            // play silence instead.
            memset(outHeader->pBuffer,
                   0,
                   mConfig->outputFrameSize * sizeof(int16_t));

            mConfig->inputBufferUsedLength = inHeader->nFilledLen;
        } else if (mConfig->samplingRate != mSamplingRate
                || mConfig->num_channels != mNumChannels) {
            // The stream parameters (sampling rate, channel count) changed,
            // so the output port needs to be reconfigured.
            mSamplingRate = mConfig->samplingRate;
            mNumChannels = mConfig->num_channels;

            notify(OMX_EventPortSettingsChanged, 1, 0, NULL);
            mOutputPortSettingsChange = AWAITING_DISABLED;
            return;
        }

        if (mIsFirst) {
            mIsFirst = false;
            // The decoder delay is 529 samples, so trim that many samples off
            // the start of the first output buffer. This essentially makes this
            // decoder have zero delay, which the rest of the pipeline assumes.
            outHeader->nOffset =
                kPVMP3DecoderDelay * mNumChannels * sizeof(int16_t);

            outHeader->nFilledLen =
                mConfig->outputFrameSize * sizeof(int16_t) - outHeader->nOffset;
        } else {
            outHeader->nOffset = 0;
            outHeader->nFilledLen = mConfig->outputFrameSize * sizeof(int16_t);
        }

        outHeader->nTimeStamp =
            mAnchorTimeUs
                + (mNumFramesOutput * 1000000ll) / mConfig->samplingRate;

        outHeader->nFlags = 0;

        CHECK_GE(inHeader->nFilledLen, mConfig->inputBufferUsedLength);

        inHeader->nOffset += mConfig->inputBufferUsedLength;
        inHeader->nFilledLen -= mConfig->inputBufferUsedLength;

        mNumFramesOutput += mConfig->outputFrameSize / mNumChannels;

        // Once the input buffer has been fully consumed, return it via
        // notifyEmptyBufferDone.
        if (inHeader->nFilledLen == 0) {
            inInfo->mOwnedByUs = false;
            inQueue.erase(inQueue.begin());
            inInfo = NULL;
            notifyEmptyBufferDone(inHeader);
            inHeader = NULL;
        }

        outInfo->mOwnedByUs = false;
        outQueue.erase(outQueue.begin());
        outInfo = NULL;

        // Hand the decoded data to the outside via notifyFillBufferDone.
        notifyFillBufferDone(outHeader);
        outHeader = NULL;
    }
}
Next, how the input buffer is released and how the data in the output buffer is passed out.
A. Returning the input buffer
void SoftOMXComponent::notifyEmptyBufferDone(OMX_BUFFERHEADERTYPE *header) {
    (*mCallbacks->EmptyBufferDone)(
            mComponent, mComponent->pApplicationPrivate, header);
}
This notifies the outside that emptyThisBuffer has been completed. The callback invoked is a method of OMXNodeInstance (how the callback table gets wired in is left for the reader to trace):
OMX_ERRORTYPE OMXNodeInstance::OnEmptyBufferDone(
        OMX_IN OMX_HANDLETYPE hComponent,
        OMX_IN OMX_PTR pAppData,
        OMX_IN OMX_BUFFERHEADERTYPE* pBuffer) {
    OMXNodeInstance *instance = static_cast<OMXNodeInstance *>(pAppData);
    if (instance->mDying) {
        return OMX_ErrorNone;
    }
    return instance->owner()->OnEmptyBufferDone(instance->nodeID(), pBuffer);
}
OMXNodeInstance's owner is OMX, so the call continues as:
OMX_ERRORTYPE OMX::OnEmptyBufferDone(
        node_id node, OMX_IN OMX_BUFFERHEADERTYPE *pBuffer) {
    ALOGV("OnEmptyBufferDone buffer=%p", pBuffer);

    omx_message msg;
    msg.type = omx_message::EMPTY_BUFFER_DONE;
    msg.node = node;
    msg.u.buffer_data.buffer = pBuffer;

    findDispatcher(node)->post(msg);

    return OMX_ErrorNone;
}
sp<OMX::CallbackDispatcher> OMX::findDispatcher(node_id node) {
    Mutex::Autolock autoLock(mLock);

    ssize_t index = mDispatchers.indexOfKey(node);
    return index < 0 ? NULL : mDispatchers.valueAt(index);
}
The dispatcher was created earlier in allocateNode via mDispatchers.add(*node, new CallbackDispatcher(instance)). Looking at its implementation, CallbackDispatcher::post eventually calls dispatch on the dispatcher's own thread:
void OMX::CallbackDispatcher::dispatch(const omx_message &msg) {
    if (mOwner == NULL) {
        ALOGV("Would have dispatched a message to a node that's already gone.");
        return;
    }
    mOwner->onMessage(msg);
}
The owner here is the OMXNodeInstance, so after this detour the message arrives back at OMXNodeInstance::onMessage:
void OMXNodeInstance::onMessage(const omx_message &msg) {
    if (msg.type == omx_message::FILL_BUFFER_DONE) {
        OMX_BUFFERHEADERTYPE *buffer =
            static_cast<OMX_BUFFERHEADERTYPE *>(
                    msg.u.extended_buffer_data.buffer);

        BufferMeta *buffer_meta =
            static_cast<BufferMeta *>(buffer->pAppPrivate);

        buffer_meta->CopyFromOMX(buffer);
    }

    mObserver->onMessage(msg);
}
onMessage then forwards the message to mObserver, i.e. the OMXCodecObserver constructed in OMXCodec::Create, whose onMessage is:
virtual void onMessage(const omx_message &msg) {
    sp<OMXCodec> codec = mTarget.promote();

    if (codec.get() != NULL) {
        Mutex::Autolock autoLock(codec->mLock);
        codec->on_message(msg);
        codec.clear();
    }
}
So the message finally lands back in OMXCodec:
void OMXCodec::on_message(const omx_message &msg) {
    switch (msg.type) {
        ......
        case omx_message::EMPTY_BUFFER_DONE:
        {
            IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;

            Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];
            size_t i = 0;
            while (i < buffers->size() && (*buffers)[i].mBuffer != buffer) {
                ++i;
            }

            BufferInfo* info = &buffers->editItemAt(i);
            info->mStatus = OWNED_BY_US;

            // Buffer could not be released until empty buffer done is called.
            if (info->mMediaBuffer != NULL) {
                info->mMediaBuffer->release();
                info->mMediaBuffer = NULL;
            }

            drainInputBuffer(&buffers->editItemAt(i));
            break;
        }
        ......
    }
}
This part is convoluted but worth tracing carefully. Two things to note: first, although info->mMediaBuffer->release() is called here, its reference count stays above zero, so the buffer is not actually freed; second, after the release, drainInputBuffer(&buffers->editItemAt(i)) is called to refill it. In other words, once a decode/playback cycle has been kicked off, this is where input data keeps being read and submitted for decoding in a loop; the decoded output comes back through fillOutputBuffer, described below.
B. Returning the output data: notifyFillBufferDone(outHeader)
void SoftOMXComponent::notifyFillBufferDone(OMX_BUFFERHEADERTYPE *header) {
    (*mCallbacks->FillBufferDone)(
            mComponent, mComponent->pApplicationPrivate, header);
}
OMX_ERRORTYPE OMX::OnFillBufferDone(
        node_id node, OMX_IN OMX_BUFFERHEADERTYPE *pBuffer) {
    ALOGV("OnFillBufferDone buffer=%p", pBuffer);

    omx_message msg;
    msg.type = omx_message::FILL_BUFFER_DONE;
    msg.node = node;
    msg.u.extended_buffer_data.buffer = pBuffer;
    msg.u.extended_buffer_data.range_offset = pBuffer->nOffset;
    msg.u.extended_buffer_data.range_length = pBuffer->nFilledLen;
    msg.u.extended_buffer_data.flags = pBuffer->nFlags;
    msg.u.extended_buffer_data.timestamp = pBuffer->nTimeStamp;
    msg.u.extended_buffer_data.platform_private = pBuffer->pPlatformPrivate;
    msg.u.extended_buffer_data.data_ptr = pBuffer->pBuffer;

    findDispatcher(node)->post(msg);

    return OMX_ErrorNone;
}
The final handling is again in OMXCodec.cpp:
void OMXCodec::on_message(const omx_message &msg) {
    ......
        case omx_message::FILL_BUFFER_DONE:
        {
            ......
            info->mStatus = OWNED_BY_US;

            mFilledBuffers.push_back(i);
            mBufferFilled.signal();
            break;
        }
    ......
}
The gist is just these few lines: set mStatus back to OWNED_BY_US so the component no longer touches the buffer, push the buffer's index onto mFilledBuffers, and signal mBufferFilled so a waiting read() can wake up.
4.2 fillOutputBuffers
void OMXCodec::fillOutputBuffers() {
    CHECK_EQ((int)mState, (int)EXECUTING);

    Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexOutput];
    for (size_t i = 0; i < buffers->size(); ++i) {
        BufferInfo *info = &buffers->editItemAt(i);
        if (info->mStatus == OWNED_BY_US) {
            fillOutputBuffer(&buffers->editItemAt(i));
        }
    }
}
It walks the output BufferInfos we own and kicks off output on each one:
void OMXCodec::fillOutputBuffer(BufferInfo *info) {
    ......
    status_t err = mOMX->fillBuffer(mNode, info->mBuffer);

    info->mStatus = OWNED_BY_COMPONENT;
}
The rest mirrors the input path; step by step:
status_t OMXNodeInstance::fillBuffer(OMX::buffer_id buffer) {
    Mutex::Autolock autoLock(mLock);

    OMX_BUFFERHEADERTYPE *header = (OMX_BUFFERHEADERTYPE *)buffer;
    header->nFilledLen = 0;
    header->nOffset = 0;
    header->nFlags = 0;

    OMX_ERRORTYPE err = OMX_FillThisBuffer(mHandle, header);

    return StatusFromOMXError(err);
}
After some initialization the call goes into the soft codec, i.e.:
OMX_ERRORTYPE SimpleSoftOMXComponent::fillThisBuffer(
        OMX_BUFFERHEADERTYPE *buffer) {
    sp<AMessage> msg = new AMessage(kWhatFillThisBuffer, mHandler->id());
    msg->setPointer("header", buffer);
    msg->post();

    return OMX_ErrorNone;
}
As before, the receiver is the SimpleSoftOMXComponent::onMessageReceived shown earlier: the kWhatFillThisBuffer case marks the buffer as owned by the component, queues it on the output port, and calls onQueueFilled.
So SoftMP3::onQueueFilled runs another decode pass, and then the two calls
notifyEmptyBufferDone(inHeader);
notifyFillBufferDone(outHeader);
push playback forward again.
http://www.cnblogs.com/shakin/p/4741242.html