Android Audio Recording Core: An Analysis of RecordThread::threadLoop
RecordThread::threadLoop() is the main loop of the audio-capture thread in Android's audio system (AudioFlinger). It reads audio data from the underlying driver/HAL, manages recording Tracks, processes effect chains, and synchronizes timestamps. This article walks through how the audio data is processed, where it is stored, and the mechanisms involved.
Source file: frameworks/av/services/audioflinger/Threads.cpp
Source code:
bool RecordThread::threadLoop()
{
    nsecs_t lastWarning = 0;

    inputStandBy();

reacquire_wakelock:
    {
        audio_utils::lock_guard _l(mutex());
        acquireWakeLock_l();
    }

    // used to request a deferred sleep, to be executed later while mutex is unlocked
    uint32_t sleepUs = 0;

    // timestamp correction enable is determined under lock, used in processing step.
    bool timestampCorrectionEnabled = false;

    int64_t lastLoopCountRead = -2;  // never matches "previous" loop, when loopCount = 0.

    // loop while there is work to do
    for (int64_t loopCount = 0;; ++loopCount) {  // loopCount used for statistics tracking
        // Note: these sp<> are released at the end of the for loop outside of the mutex() lock.
        sp<IAfRecordTrack> activeTrack;
        std::vector<sp<IAfRecordTrack>> oldActiveTracks;
        Vector<sp<IAfEffectChain>> effectChains;

        // activeTracks accumulates a copy of a subset of mActiveTracks
        Vector<sp<IAfRecordTrack>> activeTracks;

        // reference to the (first and only) active fast track
        sp<IAfRecordTrack> fastTrack;

        // reference to a fast track which is about to be removed
        sp<IAfRecordTrack> fastTrackToRemove;

        bool silenceFastCapture = false;

        { // scope for mutex()
            audio_utils::unique_lock _l(mutex());

            processConfigEvents_l();

            // check exitPending here because checkForNewParameters_l() and
            // checkForNewParameters_l() can temporarily release mutex()
            if (exitPending()) {
                break;
            }

            // sleep with mutex unlocked
            if (sleepUs > 0) {
                ATRACE_BEGIN("sleepC");
                (void)mWaitWorkCV.wait_for(_l, std::chrono::microseconds(sleepUs));
                ATRACE_END();
                sleepUs = 0;
                continue;
            }

            // if no active track(s), then standby and release wakelock
            size_t size = mActiveTracks.size();
            if (size == 0) {
                standbyIfNotAlreadyInStandby();
                // exitPending() can't become true here
                releaseWakeLock_l();
                ALOGV("RecordThread: loop stopping");
                // go to sleep
                mWaitWorkCV.wait(_l);
                ALOGV("RecordThread: loop starting");
                goto reacquire_wakelock;
            }

            bool doBroadcast = false;
            bool allStopped = true;
            for (size_t i = 0; i < size; ) {
                if (activeTrack) {  // ensure track release is outside lock.
                    oldActiveTracks.emplace_back(std::move(activeTrack));
                }
                activeTrack = mActiveTracks[i];
                if (activeTrack->isTerminated()) {
                    if (activeTrack->isFastTrack()) {
                        ALOG_ASSERT(fastTrackToRemove == 0);
                        fastTrackToRemove = activeTrack;
                    }
                    removeTrack_l(activeTrack);
                    mActiveTracks.remove(activeTrack);
                    size--;
                    continue;
                }

                IAfTrackBase::track_state activeTrackState = activeTrack->state();
                switch (activeTrackState) {

                case IAfTrackBase::PAUSING:
                    mActiveTracks.remove(activeTrack);
                    activeTrack->setState(IAfTrackBase::PAUSED);
                    if (activeTrack->isFastTrack()) {
                        ALOGV("%s fast track is paused, thus removed from active list", __func__);
                        // Keep a ref on fast track to wait for FastCapture thread to get updated
                        // state before potential track removal
                        fastTrackToRemove = activeTrack;
                    }
                    doBroadcast = true;
                    size--;
                    continue;

                case IAfTrackBase::STARTING_1:
                    sleepUs = 10000;
                    i++;
                    allStopped = false;
                    continue;

                case IAfTrackBase::STARTING_2:
                    doBroadcast = true;
                    if (mStandby) {
                        mThreadMetrics.logBeginInterval();
                        mThreadSnapshot.onBegin();
                        mStandby = false;
                    }
                    activeTrack->setState(IAfTrackBase::ACTIVE);
                    allStopped = false;
                    break;

                case IAfTrackBase::ACTIVE:
                    allStopped = false;
                    break;

                case IAfTrackBase::IDLE:    // cannot be on ActiveTracks if idle
                case IAfTrackBase::PAUSED:  // cannot be on ActiveTracks if paused
                case IAfTrackBase::STOPPED: // cannot be on ActiveTracks if destroyed/terminated
                default:
                    LOG_ALWAYS_FATAL("%s: Unexpected active track state:%d, id:%d, tracks:%zu",
                            __func__, activeTrackState, activeTrack->id(), size);
                }

                if (activeTrack->isFastTrack()) {
                    ALOG_ASSERT(!mFastTrackAvail);
                    ALOG_ASSERT(fastTrack == 0);
                    // if the active fast track is silenced either:
                    // 1) silence the whole capture from fast capture buffer if this is
                    //    the only active track
                    // 2) invalidate this track: this will cause the client to reconnect and possibly
                    //    be invalidated again until unsilenced
                    bool invalidate = false;
                    if (activeTrack->isSilenced()) {
                        if (size > 1) {
                            invalidate = true;
                        } else {
                            silenceFastCapture = true;
                        }
                    }
                    // Invalidate fast tracks if access to audio history is required as this is not
                    // possible with fast tracks. Once the fast track has been invalidated, no new
                    // fast track will be created until mMaxSharedAudioHistoryMs is cleared.
                    if (mMaxSharedAudioHistoryMs != 0) {
                        invalidate = true;
                    }
                    if (invalidate) {
                        activeTrack->invalidate();
                        fastTrackToRemove = activeTrack;
                        removeTrack_l(activeTrack);
                        mActiveTracks.remove(activeTrack);
                        size--;
                        continue;
                    }
                    fastTrack = activeTrack;
                }

                activeTracks.add(activeTrack);
                i++;
            }

            mActiveTracks.updatePowerState_l(this);

            // check if traces have been enabled.
            bool atraceEnabled = ATRACE_ENABLED();
            if (atraceEnabled != mAtraceEnabled) [[unlikely]] {
                mAtraceEnabled = atraceEnabled;
                if (atraceEnabled) {
                    const auto devices = patchSourcesToString(&mPatch);
                    for (const auto& track : activeTracks) {
                        track->logRefreshInterval(devices);
                    }
                }
            }

            updateMetadata_l();

            if (allStopped) {
                standbyIfNotAlreadyInStandby();
            }
            if (doBroadcast) {
                mStartStopCV.notify_all();
            }

            // sleep if there are no active tracks to process
            if (activeTracks.isEmpty()) {
                if (sleepUs == 0) {
                    sleepUs = kRecordThreadSleepUs;
                }
                continue;
            }
            sleepUs = 0;

            timestampCorrectionEnabled = isTimestampCorrectionEnabled_l();
            lockEffectChains_l(effectChains);

            // We're exiting locked scope with non empty activeTracks, make sure
            // that we're not in standby mode which we could have entered if some
            // tracks were muted/unmuted.
            mStandby = false;
        }

        // thread mutex is now unlocked, mActiveTracks unknown, activeTracks.size() > 0

        size_t size = effectChains.size();
        for (size_t i = 0; i < size; i++) {
            // thread mutex is not locked, but effect chain is locked
            effectChains[i]->process_l();
        }

        // Push a new fast capture state if fast capture is not already running, or cblk change
        if (mFastCapture != 0) {
            FastCaptureStateQueue *sq = mFastCapture->sq();
            FastCaptureState *state = sq->begin();
            bool didModify = false;
            FastCaptureStateQueue::block_t block = FastCaptureStateQueue::BLOCK_UNTIL_PUSHED;
            if (state->mCommand != FastCaptureState::READ_WRITE /* FIXME &&
                    (kUseFastMixer != FastMixer_Dynamic || state->mTrackMask > 1)*/) {
                if (state->mCommand == FastCaptureState::COLD_IDLE) {
                    int32_t old = android_atomic_inc(&mFastCaptureFutex);
                    if (old == -1) {
                        (void) syscall(__NR_futex, &mFastCaptureFutex, FUTEX_WAKE_PRIVATE, 1);
                    }
                }
                state->mCommand = FastCaptureState::READ_WRITE;
#if 0   // FIXME
                mFastCaptureDumpState.increaseSamplingN(mAfThreadCallback->isLowRamDevice() ?
                        FastThreadDumpState::kSamplingNforLowRamDevice :
                        FastThreadDumpState::kSamplingN);
#endif
                didModify = true;
            }
            audio_track_cblk_t *cblkOld = state->mCblk;
            audio_track_cblk_t *cblkNew = fastTrack != 0 ? fastTrack->cblk() : NULL;
            if (cblkNew != cblkOld) {
                state->mCblk = cblkNew;
                // block until acked if removing a fast track
                if (cblkOld != NULL) {
                    block = FastCaptureStateQueue::BLOCK_UNTIL_ACKED;
                }
                didModify = true;
            }
            AudioBufferProvider* abp = (fastTrack != 0 && fastTrack->isPatchTrack()) ?
                    reinterpret_cast<AudioBufferProvider*>(fastTrack.get()) : nullptr;
            if (state->mFastPatchRecordBufferProvider != abp) {
                state->mFastPatchRecordBufferProvider = abp;
                state->mFastPatchRecordFormat = fastTrack == 0 ?
                        AUDIO_FORMAT_INVALID : fastTrack->format();
                didModify = true;
            }
            if (state->mSilenceCapture != silenceFastCapture) {
                state->mSilenceCapture = silenceFastCapture;
                didModify = true;
            }
            sq->end(didModify);
            if (didModify) {
                sq->push(block);
#if 0
                if (kUseFastCapture == FastCapture_Dynamic) {
                    mNormalSource = mPipeSource;
                }
#endif
            }
        }

        // now run the fast track destructor with thread mutex unlocked
        fastTrackToRemove.clear();

        // Read from HAL to keep up with fastest client if multiple active tracks, not slowest one.
        // Only the client(s) that are too slow will overrun. But if even the fastest client is too
        // slow, then this RecordThread will overrun by not calling HAL read often enough.
        // If destination is non-contiguous, first read past the nominal end of buffer, then
        // copy to the right place.  Permitted because mRsmpInBuffer was over-allocated.

        int32_t rear = mRsmpInRear & (mRsmpInFramesP2 - 1);
        ssize_t framesRead = 0; // not needed, remove clang-tidy warning.
        const int64_t lastIoBeginNs = systemTime(); // start IO timing

        // If an NBAIO source is present, use it to read the normal capture's data
        if (mPipeSource != 0) {
            size_t framesToRead = min(mRsmpInFramesOA - rear, mRsmpInFramesP2 / 2);

            // The audio fifo read() returns OVERRUN on overflow, and advances the read pointer
            // to the full buffer point (clearing the overflow condition).  Upon OVERRUN error,
            // we immediately retry the read() to get data and prevent another overflow.
            for (int retries = 0; retries <= 2; ++retries) {
                ALOGW_IF(retries > 0, "overrun on read from pipe, retry #%d", retries);
                framesRead = mPipeSource->read((uint8_t*)mRsmpInBuffer + rear * mFrameSize,
                        framesToRead);
                if (framesRead != OVERRUN) break;
            }

            const ssize_t availableToRead = mPipeSource->availableToRead();
            if (availableToRead >= 0) {
                mMonopipePipeDepthStats.add(availableToRead);
                // PipeSource is the primary clock.  It is up to the AudioRecord client to keep up.
                LOG_ALWAYS_FATAL_IF((size_t)availableToRead > mPipeFramesP2,
                        "more frames to read than fifo size, %zd > %zu",
                        availableToRead, mPipeFramesP2);
                const size_t pipeFramesFree = mPipeFramesP2 - availableToRead;
                const size_t sleepFrames = min(pipeFramesFree, mRsmpInFramesP2) / 2;
                ALOGVV("mPipeFramesP2:%zu mRsmpInFramesP2:%zu sleepFrames:%zu availableToRead:%zd",
                        mPipeFramesP2, mRsmpInFramesP2, sleepFrames, availableToRead);
                sleepUs = (sleepFrames * 1000000LL) / mSampleRate;
            }
            if (framesRead < 0) {
                status_t status = (status_t) framesRead;
                switch (status) {
                case OVERRUN:
                    ALOGW("overrun on read from pipe");
                    framesRead = 0;
                    break;
                case NEGOTIATE:
                    ALOGE("re-negotiation is needed");
                    framesRead = -1;  // Will cause an attempt to recover.
                    break;
                default:
                    ALOGE("unknown error %zd", framesRead);
                    framesRead = -1;
                    break;
                }
            }
        // otherwise use the HAL / AudioStreamIn directly
        } else {
            ATRACE_BEGIN("read");
            size_t bytesRead;
            status_t result = mSource->read(
                    (uint8_t*)mRsmpInBuffer + rear * mFrameSize, mBufferSize, &bytesRead);
            ATRACE_END();
            if (result < 0) {
                framesRead = result;
            } else {
                framesRead = bytesRead / mFrameSize;
            }
        }

        const int64_t lastIoEndNs = systemTime(); // end IO timing

        // Update server timestamp with server stats
        if (framesRead >= 0) {
            mTimestamp.mPosition[ExtendedTimestamp::LOCATION_SERVER] += framesRead;
            mTimestamp.mTimeNs[ExtendedTimestamp::LOCATION_SERVER] = lastIoEndNs;
        }

        // Update server timestamp with kernel stats
        if (mPipeSource.get() == nullptr /* don't obtain for FastCapture, could block */) {
            int64_t position, time;
            if (mStandby) {
                mTimestampVerifier.discontinuity(audio_is_linear_pcm(mFormat) ?
                    mTimestampVerifier.DISCONTINUITY_MODE_CONTINUOUS :
                    mTimestampVerifier.DISCONTINUITY_MODE_ZERO);
            } else if (mSource->getCapturePosition(&position, &time) == NO_ERROR
                    && time > mTimestamp.mTimeNs[ExtendedTimestamp::LOCATION_KERNEL]) {
                mTimestampVerifier.add(position, time, mSampleRate);
                if (timestampCorrectionEnabled) {
                    ALOGVV("TS_BEFORE: %d %lld %lld",
                            id(), (long long)time, (long long)position);
                    auto correctedTimestamp = mTimestampVerifier.getLastCorrectedTimestamp();
                    position = correctedTimestamp.mFrames;
                    time = correctedTimestamp.mTimeNs;
                    ALOGVV("TS_AFTER: %d %lld %lld",
                            id(), (long long)time, (long long)position);
                }

                mTimestamp.mPosition[ExtendedTimestamp::LOCATION_KERNEL] = position;
                mTimestamp.mTimeNs[ExtendedTimestamp::LOCATION_KERNEL] = time;
                // Note: In general record buffers should tend to be empty in
                // a properly running pipeline.
                //
                // Also, it is not advantageous to call get_presentation_position during the read
                // as the read obtains a lock, preventing the timestamp call from executing.
            } else {
                mTimestampVerifier.error();
            }
        }

        // From the timestamp, input read latency is negative output write latency.
        const audio_input_flags_t flags = mInput != NULL ? mInput->flags : AUDIO_INPUT_FLAG_NONE;
        const double latencyMs = IAfRecordTrack::checkServerLatencySupported(mFormat, flags)
                ? - mTimestamp.getOutputServerLatencyMs(mSampleRate) : 0.;
        if (latencyMs != 0.) { // note 0. means timestamp is empty.
            mLatencyMs.add(latencyMs);
        }

        // Use this to track timestamp information
        // ALOGD("%s", mTimestamp.toString().c_str());

        if (framesRead < 0 || (framesRead == 0 && mPipeSource == 0)) {
            ALOGE("read failed: framesRead=%zd", framesRead);
            // Force input into standby so that it tries to recover at next read attempt
            inputStandBy();
            sleepUs = kRecordThreadSleepUs;
        }
        if (framesRead <= 0) {
            goto unlock;
        }
        ALOG_ASSERT(framesRead > 0);
        mFramesRead += framesRead;

#ifdef TEE_SINK
        (void)mTee.write((uint8_t*)mRsmpInBuffer + rear * mFrameSize, framesRead);
#endif
        // If destination is non-contiguous, we now correct for reading past end of buffer.
        {
            size_t part1 = mRsmpInFramesP2 - rear;
            if ((size_t) framesRead > part1) {
                memcpy(mRsmpInBuffer, (uint8_t*)mRsmpInBuffer + mRsmpInFramesP2 * mFrameSize,
                        (framesRead - part1) * mFrameSize);
            }
        }
        mRsmpInRear = audio_utils::safe_add_overflow(mRsmpInRear, (int32_t)framesRead);

        size = activeTracks.size();

        // loop over each active track
        for (size_t i = 0; i < size; i++) {
            activeTrack = activeTracks[i];

            // skip fast tracks, as those are handled directly by FastCapture
            if (activeTrack->isFastTrack()) {
                continue;
            }

            // TODO: This code probably should be moved to RecordTrack.
            // TODO: Update the activeTrack buffer converter in case of reconfigure.

            enum {
                OVERRUN_UNKNOWN,
                OVERRUN_TRUE,
                OVERRUN_FALSE
            } overrun = OVERRUN_UNKNOWN;

            // loop over getNextBuffer to handle circular sink
            for (;;) {
                activeTrack->sinkBuffer().frameCount = ~0;
                status_t status = activeTrack->getNextBuffer(&activeTrack->sinkBuffer());
                size_t framesOut = activeTrack->sinkBuffer().frameCount;
                LOG_ALWAYS_FATAL_IF((status == OK) != (framesOut > 0));

                // check available frames and handle overrun conditions
                // if the record track isn't draining fast enough.
                bool hasOverrun;
                size_t framesIn;
                activeTrack->resamplerBufferProvider()->sync(&framesIn, &hasOverrun);
                if (hasOverrun) {
                    overrun = OVERRUN_TRUE;
                }
                if (framesOut == 0 || framesIn == 0) {
                    break;
                }

                // Don't allow framesOut to be larger than what is possible with resampling
                // from framesIn.
                // This isn't strictly necessary but helps limit buffer resizing in
                // RecordBufferConverter.  TODO: remove when no longer needed.
                if (audio_is_linear_pcm(activeTrack->format())) {
                    framesOut = min(framesOut,
                            destinationFramesPossible(
                                    framesIn, mSampleRate, activeTrack->sampleRate()));
                }

                if (activeTrack->isDirect()) {
                    // No RecordBufferConverter used for direct streams. Pass
                    // straight from RecordThread buffer to RecordTrack buffer.
                    AudioBufferProvider::Buffer buffer;
                    buffer.frameCount = framesOut;
                    const status_t getNextBufferStatus =
                            activeTrack->resamplerBufferProvider()->getNextBuffer(&buffer);
                    if (getNextBufferStatus == OK && buffer.frameCount != 0) {
                        ALOGV_IF(buffer.frameCount != framesOut,
                                "%s() read less than expected (%zu vs %zu)",
                                __func__, buffer.frameCount, framesOut);
                        framesOut = buffer.frameCount;
                        memcpy(activeTrack->sinkBuffer().raw,
                                buffer.raw, buffer.frameCount * mFrameSize);
                        activeTrack->resamplerBufferProvider()->releaseBuffer(&buffer);
                    } else {
                        framesOut = 0;
                        ALOGE("%s() cannot fill request, status: %d, frameCount: %zu",
                                __func__, getNextBufferStatus, buffer.frameCount);
                    }
                } else {
                    // process frames from the RecordThread buffer provider to the RecordTrack
                    // buffer
                    framesOut = activeTrack->recordBufferConverter()->convert(
                            activeTrack->sinkBuffer().raw,
                            activeTrack->resamplerBufferProvider(),
                            framesOut);
                }

                if (framesOut > 0 && (overrun == OVERRUN_UNKNOWN)) {
                    overrun = OVERRUN_FALSE;
                }

                // MediaSyncEvent handling: Synchronize AudioRecord to AudioTrack completion.
                const ssize_t framesToDrop =
                        activeTrack->synchronizedRecordState().updateRecordFrames(framesOut);
                if (framesToDrop == 0) {
                    // no sync event, process normally, otherwise ignore.
                    if (framesOut > 0) {
                        activeTrack->sinkBuffer().frameCount = framesOut;
                        // Sanitize before releasing if the track has no access to the source data
                        // An idle UID receives silence from non virtual devices until active
                        if (activeTrack->isSilenced()) {
                            memset(activeTrack->sinkBuffer().raw,
                                    0, framesOut * activeTrack->frameSize());
                        }
                        activeTrack->releaseBuffer(&activeTrack->sinkBuffer());
                    }
                }
                if (framesOut == 0) {
                    break;
                }
            }

            switch (overrun) {
            case OVERRUN_TRUE:
                // client isn't retrieving buffers fast enough
                if (!activeTrack->setOverflow()) {
                    nsecs_t now = systemTime();
                    // FIXME should lastWarning per track?
                    if ((now - lastWarning) > kWarningThrottleNs) {
                        ALOGW("RecordThread: buffer overflow");
                        lastWarning = now;
                    }
                }
                break;
            case OVERRUN_FALSE:
                activeTrack->clearOverflow();
                break;
            case OVERRUN_UNKNOWN:
                break;
            }

            // update frame information and push timestamp out
            activeTrack->updateTrackFrameInfo(
                    activeTrack->serverProxy()->framesReleased(),
                    mTimestamp.mPosition[ExtendedTimestamp::LOCATION_SERVER],
                    mSampleRate, mTimestamp);
        }

unlock:
        // enable changes in effect chain
        unlockEffectChains(effectChains);
        // effectChains doesn't need to be cleared, since it is cleared by destructor at scope end
        if (audio_has_proportional_frames(mFormat)
            && loopCount == lastLoopCountRead + 1) {
            const int64_t readPeriodNs = lastIoEndNs - mLastIoEndNs;
            const double jitterMs =
                    TimestampVerifier<int64_t, int64_t>::computeJitterMs(
                            {framesRead, readPeriodNs}, {0, 0} /* lastTimestamp */, mSampleRate);
            const double processMs = (lastIoBeginNs - mLastIoEndNs) * 1e-6;

            audio_utils::lock_guard _l(mutex());
            mIoJitterMs.add(jitterMs);
            mProcessTimeMs.add(processMs);
        }
        mThreadloopExecutor.process();

        // update timing info.
        mLastIoBeginNs = lastIoBeginNs;
        mLastIoEndNs = lastIoEndNs;
        lastLoopCountRead = loopCount;
    }

    mThreadloopExecutor.process(); // process any remaining deferred actions.
    // deferred actions after this point are ignored.

    standbyIfNotAlreadyInStandby();

    {
        audio_utils::lock_guard _l(mutex());
        for (size_t i = 0; i < mTracks.size(); i++) {
            sp<IAfRecordTrack> track = mTracks[i];
            track->invalidate();
        }
        mActiveTracks.clear();
        mStartStopCV.notify_all();
    }

    releaseWakeLock();

    ALOGV("RecordThread %p exiting", this);
    return false;
}
1. Overall Flow
The rough flow is as follows:
1. Initialize the thread and enter standby.
2. Check the list of active recording Tracks (mActiveTracks); if there is no activity, sleep, otherwise continue processing.
3. Check and update the state of each active Track (Active, Pausing, Paused, Starting, etc.).
4. Process the effect chains, applying audio effects to the data.
5. Read audio data from the underlying input source (HAL/pipe) into the thread's internal buffer.
6. Distribute the data to each active RecordTrack, performing sample-rate conversion, format conversion, silencing, and synchronization as needed.
7. Track and update timestamp information.
8. Handle exceptional conditions such as buffer overruns.
9. Clean up resources and state before the thread exits.
2. Audio Data Processing
2.1. Reading and storing HAL/pipe data
The mRsmpInBuffer buffer
Definition: `mRsmpInBuffer` is the RecordThread-internal buffer that holds the raw audio data read from the lower layers.
Type: effectively a raw `void*`/`uint8_t*` allocation sized in frames (mRsmpInFramesP2, a power of two, plus some over-allocation), with each frame occupying mFrameSize bytes.
Purpose: all data read from the HAL/pipe is first written into this ring buffer. The buffer has its own "rear" position, mRsmpInRear, which tracks where new data is written.
Reading the data
If FastCapture is in use, data is read from the pipe (`mPipeSource->read()`); otherwise the HAL is called directly (`mSource->read()`).
The data is written into mRsmpInBuffer at the corresponding offset (rear * mFrameSize).
The number of frames read is kept in framesRead.

framesRead = mPipeSource->read((uint8_t*)mRsmpInBuffer + rear * mFrameSize, framesToRead);
// or
status_t result = mSource->read((uint8_t*)mRsmpInBuffer + rear * mFrameSize, mBufferSize, &bytesRead);
framesRead = bytesRead / mFrameSize;

mRsmpInRear is then advanced accordingly, always pointing just past the newest data.
2.2. Distributing and converting Track data
Each active Track (activeTracks) represents one client (e.g. an AudioRecord instance); each obtains its audio data from the RecordThread.
The Track's buffer
Each Track has its own buffer (typically organized through its server-side proxy, e.g. an AudioRecordServerProxy over the shared control block), which is used to share data with the application layer.
The thread obtains the writable region through the activeTrack->sinkBuffer().raw pointer.
Distribution flow
1. getNextBuffer: obtain the Track's writable buffer (sinkBuffer).
2. Data preparation: for a direct (pass-through) Track, copy directly; otherwise convert sample rate/format through RecordBufferConverter.
3. Data write: convert the data from mRsmpInBuffer and write it into sinkBuffer().raw.
4. releaseBuffer: signal the Track that the data is ready for the application layer to read.
Non-direct streams

framesOut = activeTrack->recordBufferConverter()->convert(
        activeTrack->sinkBuffer().raw,
        activeTrack->resamplerBufferProvider(),
        framesOut);

RecordBufferConverter is responsible for resampling/reformatting when the Track's sample rate or format differs from the HAL's.
Note that silencing (filling the sink buffer with 0 before release) and dropping frames for MediaSyncEvent synchronization are handled by threadLoop itself around the conversion, not by the converter.
2.3. Data flow summary
1. Origin: the underlying driver/HAL, via mSource->read() or mPipeSource->read().
2. Thread buffer: written into RecordThread's mRsmpInBuffer (a ring buffer whose write position is maintained by mRsmpInRear).
3. Track distribution: the thread loops over all active Tracks and writes the data (after any required conversion) into each Track's shared buffer, sinkBuffer().raw.
4. Application read: the application-side AudioRecord reads the data out of the Track's shared buffer.
3. Key Structures and Members
mRsmpInBuffer: RecordThread's input ring buffer, holding the raw audio read from below.
mRsmpInRear: the frame index just past the newest data written into mRsmpInBuffer.
mActiveTracks: the list of Tracks that currently need audio data delivered.
activeTrack->sinkBuffer().raw: each Track's buffer from which the application layer reads audio data.
RecordBufferConverter: takes data originating in mRsmpInBuffer, converts it to the Track's format, and writes it into sinkBuffer().raw.
mFrameSize: bytes per frame (= channel count * sample width in bits / 8).
mBufferSize: number of bytes requested from the HAL per read.
4. Summary
The core flow for processing and storing the audio data:
Read: HAL/pipe -> mRsmpInBuffer (ring buffer).
Distribute/convert: mRsmpInBuffer -> RecordBufferConverter -> each Track's sinkBuffer().raw.
Application read: AudioRecord reads the data through the Track's buffer.
Together this supports concurrent multi-Track recording, flexible format conversion, efficient memory reuse, overrun handling, and extras such as effect chains and synchronization.
5. Supplementary Notes
The recording path in Android AudioFlinger (RecordThread) indeed has two levels of ring buffer, each serving a different role in the data flow.
5.1. First ring buffer (raw data read from the HAL)
RecordThread::mRsmpInBuffer
Associated positions/indices: mRsmpInRear, mRsmpInFramesP2, etc.
Role:
Holds the raw audio data read from the HAL (or the FastCapture pipe).
It is a thread-level shared buffer: every active Track (recording client) takes its new data from here.
It is a ring buffer (circular buffer), written cyclically via the rear position.
Example:
mSource->read((uint8_t*)mRsmpInBuffer + rear * mFrameSize, mBufferSize, &bytesRead);
5.2. Second ring buffer (per-Track buffer read by the app)
Each RecordTrack (active Track) has its own buffer:
Generally managed through RecordTrack::mBuffer or the serverProxy.
The application-side AudioRecord communicates with this buffer through shared memory mapped into the client (set up over Binder).
Role:
Holds the distributed, already-converted data for the application layer (AudioRecord) to read.
It is a Track-level buffer: each active Track (i.e. each recording client) owns an independent one.
It too is a ring buffer (circular buffer), with read/write positions usually managed by the Track's serverProxy.
Example:
// RecordThread writes into the track buffer (sinkBuffer().raw)
memcpy(activeTrack->sinkBuffer().raw, src, framesOut * activeTrack->frameSize());
// The app-side AudioRecord reads from the track buffer
AudioRecord::obtainBuffer()  // reads the track buffer
5.3. Data flow between the two
1. HAL/pipe read ➔ RecordThread::mRsmpInBuffer
2. RecordThread distribution loop (with sample-rate/format conversion as needed) ➔ each Track's buffer (sinkBuffer().raw)
3. Application AudioRecord read ➔ Track buffer
The benefits of this design:
Decoupling: multiple Tracks can consume the same raw data concurrently without interfering with one another.
Flexibility: each Track can use a different sample rate, format, and channel count; RecordThread converts during distribution.
Efficiency: the raw data is read from the HAL once and shared by all Tracks, avoiding duplicate reads.
No cross-blocking: each Track has its own ring buffer, so an app that reads slowly does not stall the other Tracks.
When debugging abnormal recorded audio, you can pinpoint which stage the problem appears in by using Android's built-in TeeSink feature to dump the raw PCM data of each stage for analysis.
The next article, "Android Audio Recording Core: ServerProxy::obtainBuffer Revealed", explains the core code for obtaining buffers from this second ring buffer.