Playback

Playback starts with the H264VideoRTPSink's startPlaying() function. startPlaying() is defined in MediaSink, so what actually runs is MediaSink::startPlaying(); it calls H264or5VideoRTPSink::continuePlaying(), which in turn calls MultiFramedRTPSink::continuePlaying(). That is where the RTP packets are assembled.

Boolean MediaSink::startPlaying(MediaSource& source,
                                afterPlayingFunc* afterFunc,
                                void* afterClientData) {
  // Make sure we're not already being played:
  if (fSource != NULL) {
    envir().setResultMsg("This sink is already being played");
    return False;
  }

  // Make sure our source is compatible:
  if (!sourceIsCompatibleWithUs(source)) {
    envir().setResultMsg("MediaSink::startPlaying(): source is not compatible!");
    return False;
  }
  fSource = (FramedSource*)&source;

  fAfterFunc = afterFunc;
  fAfterClientData = afterClientData;
  return continuePlaying();
}
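For orientation, startPlaying() is typically invoked like this in live555's test programs (a minimal sketch; videoSink, videoSource and afterPlaying are illustrative names):

// Start streaming: the sink now repeatedly pulls frames from the source;
// when the source is exhausted, the afterPlaying callback runs.
videoSink->startPlaying(*videoSource, afterPlaying, videoSink);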

In H264or5VideoRTPSink::continuePlaying(), an H264or5Fragmenter object is created; its fInputSource member is set to point at the H264VideoStreamFramer object.

Boolean H264or5VideoRTPSink::continuePlaying() {
  // First, check whether we have a 'fragmenter' class set up yet.
  // If not, create it now:
  envir() << "H264or5VideoRTPSink::continuePlaying" << "\n";
  if (fOurFragmenter == NULL) {
    fOurFragmenter = new H264or5Fragmenter(fHNumber, envir(), fSource,
                                           OutPacketBuffer::maxSize,
                                           ourMaxPacketSize() - 12/*RTP hdr size*/); // fMaxOutputPacketSize = 1456 - 12
  } else {
    fOurFragmenter->reassignInputSource(fSource);
  }
  fSource = fOurFragmenter;

  // Then call the parent class's implementation:
  return MultiFramedRTPSink::continuePlaying();
}
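Note what this does to the data-flow chain: the fragmenter is spliced in between the framer and the sink, so from here on the sink's fSource points at the fragmenter:

H264VideoStreamFramer ----> H264or5Fragmenter ----> H264VideoRTPSink
(parses the byte stream     (splits NAL units into   (packs the pieces
 into NAL units)             MTU-sized fragments)     into RTP packets)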
H264or5Fragmenter::H264or5Fragmenter(int hNumber,
                                     UsageEnvironment& env, FramedSource* inputSource,
                                     unsigned inputBufferMax, unsigned maxOutputPacketSize)
  : FramedFilter(env, inputSource),
    fHNumber(hNumber), fInputBufferSize(inputBufferMax+1),
    fMaxOutputPacketSize(maxOutputPacketSize) {
  fInputBuffer = new unsigned char[fInputBufferSize];
  reset();
}
void H264or5Fragmenter::reset() {
  fNumValidDataBytes = fCurDataOffset = 1;
  fSaveNumTruncatedBytes = 0;
  fLastFragmentCompletedNALUnit = True;
}
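Initializing fNumValidDataBytes and fCurDataOffset to 1 rather than 0 is deliberate: byte 0 of fInputBuffer is kept free so that the FU indicator (H.264) or payload header (H.265) can later be written directly in front of the NAL header without moving the frame data:

fInputBuffer: [ reserved | NAL header | NAL payload ... ]
                  [0]         [1]          [2] ...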

OutPacketBuffer::maxSize determines the size of the H264or5Fragmenter's fInputBuffer (plus the one reserved byte); each frame that the fragmenter obtains through parse() is copied into this area.

unsigned char* fInputBuffer;
unsigned fInputBufferSize;
unsigned fMaxOutputPacketSize;
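If encoded frames can be larger than the default OutPacketBuffer::maxSize, the limit must be raised before the RTPSink is created, otherwise frames are truncated (see the warning printed by afterGettingFrame1() below). A minimal sketch; the value 300000 and the variable names are illustrative:

// Raise the limit *before* any RTPSink is created:
OutPacketBuffer::maxSize = 300000; // must exceed the largest expected NAL unit
// Later, assuming env and rtpGroupsock are already set up:
RTPSink* videoSink = H264VideoRTPSink::createNew(env, &rtpGroupsock, 96);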

Boolean MultiFramedRTPSink::continuePlaying() {
  // Send the first packet.
  // (This will also schedule any future sends.)
  buildAndSendPacket(True);
  return True;
}

RTP packetization

The buildAndSendPacket() function:

void MultiFramedRTPSink::buildAndSendPacket(Boolean isFirstPacket) {
  nextTask() = NULL;
  fIsFirstPacket = isFirstPacket;

  // Set up the RTP header:
  unsigned rtpHdr = 0x80000000; // RTP version 2; marker ('M') bit not set (by default; it can be set later)
  rtpHdr |= (fRTPPayloadType<<16);
  rtpHdr |= fSeqNo; // sequence number
  fOutBuf->enqueueWord(rtpHdr);

  // Note where the RTP timestamp will go.
  // (We can't fill this in until we start packing payload frames.)
  fTimestampPosition = fOutBuf->curPacketSize();
  fOutBuf->skipBytes(4); // leave a hole for the timestamp

  fOutBuf->enqueueWord(SSRC());

  // Allow for a special, payload-format-specific header following the
  // RTP header:
  fSpecialHeaderPosition = fOutBuf->curPacketSize();
  fSpecialHeaderSize = specialHeaderSize();
  fOutBuf->skipBytes(fSpecialHeaderSize);

  // Begin packing as many (complete) frames into the packet as we can:
  fTotalFrameSpecificHeaderSizes = 0;
  fNoFramesLeft = False;
  fNumFramesUsedSoFar = 0;
  packFrame();
}
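What buildAndSendPacket() assembles, one 32-bit word at a time, is the fixed RTP header of RFC 3550: rtpHdr = 0x80000000 sets version V=2, the payload type is shifted into bits 16..22, the sequence number occupies the low 16 bits, a 4-byte hole is left for the timestamp, and the SSRC word follows:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|V=2|P|X|  CC   |M|     PT      |       sequence number         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                           timestamp                           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           synchronization source (SSRC) identifier            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+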
void MultiFramedRTPSink::packFrame() {
  // Get the next frame.

  // First, skip over the space we'll use for any frame-specific header:
  fCurFrameSpecificHeaderPosition = fOutBuf->curPacketSize();
  fCurFrameSpecificHeaderSize = frameSpecificHeaderSize();
  fOutBuf->skipBytes(fCurFrameSpecificHeaderSize);
  fTotalFrameSpecificHeaderSizes += fCurFrameSpecificHeaderSize;

  // See if we have an overflow frame that was too big for the last pkt
  if (fOutBuf->haveOverflowData()) { // if there is overflow data, consume it first
    // Use this frame before reading a new one from the source
    unsigned frameSize = fOutBuf->overflowDataSize();
    struct timeval presentationTime = fOutBuf->overflowPresentationTime();
    unsigned durationInMicroseconds = fOutBuf->overflowDurationInMicroseconds();
    fOutBuf->useOverflowData();

    afterGettingFrame1(frameSize, 0, presentationTime, durationInMicroseconds);
  } else {
    // Normal case: we need to read a new frame from the source
    if (fSource == NULL) return;
    fSource->getNextFrame(fOutBuf->curPtr(), fOutBuf->totalBytesAvailable(),
                          afterGettingFrame, this, ourHandleClosure, this);
  }
}
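packFrame() leans on a handful of OutPacketBuffer accessors; the one-line descriptions below are mine:

fOutBuf->curPtr()              // current write position: frames are read directly into the packet buffer
fOutBuf->totalBytesAvailable() // space remaining in the buffer
fOutBuf->haveOverflowData()    // true if part of a previous frame is still waiting to be sent
fOutBuf->useOverflowData()     // promote the saved overflow data to be the current frame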
MultiFramedRTPSink::buildAndSendPacket() ----> MultiFramedRTPSink::packFrame()
----> FramedSource::getNextFrame() ----> H264or5Fragmenter::doGetNextFrame()
----> FramedSource::getNextFrame() ----> MPEGVideoStreamFramer::doGetNextFrame()
----> MPEGVideoStreamFramer::continueReadProcessing() ----> H264or5VideoStreamParser::parse()

packFrame() covers two main tasks, both walked through below:

  • Fragmentation: when an acquired frame exceeds the MTU, the frame data has to be split across several RTP packets.
  • Framing: obtaining complete frames from the source.

packFrame() calls the H264or5Fragmenter's getNextFrame(); the destination buffer is the OutPacketBuffer associated with the H264VideoRTPSink. H264or5Fragmenter::doGetNextFrame() performs the fragmentation and copies each fragment to the destination address, and the MultiFramedRTPSink::afterGettingFrame() callback then packs the fragment into the outgoing RTP packet.
To get a new frame in the first place, H264or5Fragmenter::doGetNextFrame() calls MPEGVideoStreamFramer::doGetNextFrame(), which pulls the next frame out of the source data stream.

Fragmentation

void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
                                afterGettingFunc* afterGettingFunc,
                                void* afterGettingClientData,
                                onCloseFunc* onCloseFunc,
                                void* onCloseClientData) {
  // Make sure we're not already being read:
  if (fIsCurrentlyAwaitingData) {
    envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
    envir().internalError();
  }

  fTo = to;           // start address of the destination buffer
  fMaxSize = maxSize; // capacity of the destination buffer
  fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
  fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
  fAfterGettingFunc = afterGettingFunc;
  fAfterGettingClientData = afterGettingClientData;
  fOnCloseFunc = onCloseFunc;
  fOnCloseClientData = onCloseClientData;
  fIsCurrentlyAwaitingData = True;
  doGetNextFrame();
}
void H264or5Fragmenter::doGetNextFrame() {
  if (fNumValidDataBytes == 1) { // fNumValidDataBytes is initialized to 1
    // We have no NAL unit data currently in the buffer.  Read a new one:
    // (byte 0 of the buffer is left free for the first byte of the RTP payload)
    fInputSource->getNextFrame(&fInputBuffer[1], fInputBufferSize - 1,
                               afterGettingFrame, this,
                               FramedSource::handleClosure, this);
  } else {
    // We have NAL unit data in the buffer.  There are three cases to consider:
    // 1. There is a new NAL unit in the buffer, and it's small enough to deliver
    //    to the RTP sink (as is).
    // 2. There is a new NAL unit in the buffer, but it's too large to deliver to
    //    the RTP sink in its entirety.  Deliver the first fragment of this data,
    //    as a FU packet, with one extra preceding header byte (for the "FU header").
    // 3. There is a NAL unit in the buffer, and we've already delivered some
    //    fragment(s) of this.  Deliver the next fragment of this data,
    //    as a FU packet, with two (H.264) or three (H.265) extra preceding header bytes
    //    (for the "NAL header" and the "FU header").

    // The destination buffer being smaller than the maximum packet size shouldn't happen:
    if (fMaxSize < fMaxOutputPacketSize) { // shouldn't happen
      envir() << "H264or5Fragmenter::doGetNextFrame(): fMaxSize ("
              << fMaxSize << ") is smaller than expected\n";
    } else {
      fMaxSize = fMaxOutputPacketSize; // cap the deliverable size at the maximum packet size
    }

    fLastFragmentCompletedNALUnit = True; // by default
    if (fCurDataOffset == 1) { // case 1 or 2: fCurDataOffset starts at 1, i.e. nothing of this NAL unit has been delivered yet
      if (fNumValidDataBytes - 1 <= fMaxSize) { // case 1
        // The NAL unit fits in one packet: copy it directly, starting at the NAL header:
        memmove(fTo, &fInputBuffer[1], fNumValidDataBytes - 1);
        fFrameSize = fNumValidDataBytes - 1;
        fCurDataOffset = fNumValidDataBytes;
      } else { // case 2: larger than the maximum packet size, fragmentation is needed
        // We need to send the NAL unit data as FU packets.  Deliver the first
        // packet now.  Note that we add "NAL header" and "FU header" bytes to the front
        // of the packet (overwriting the existing "NAL header").
        if (fHNumber == 264) { // build the FU fragmentation headers
          fInputBuffer[0] = (fInputBuffer[1] & 0xE0) | 28;   // FU indicator
          fInputBuffer[1] = 0x80 | (fInputBuffer[1] & 0x1F); // FU header (with S bit)
        } else { // 265
          u_int8_t nal_unit_type = (fInputBuffer[1]&0x7E)>>1;
          fInputBuffer[0] = (fInputBuffer[1] & 0x81) | (49<<1); // Payload header (1st byte)
          fInputBuffer[1] = fInputBuffer[2];                    // Payload header (2nd byte)
          fInputBuffer[2] = 0x80 | nal_unit_type;               // FU header (with S bit)
        }
        // For a FU fragment, the FU indicator is copied along with the data:
        memmove(fTo, fInputBuffer, fMaxSize);
        fFrameSize = fMaxSize;
        fCurDataOffset += fMaxSize - 1;
        fLastFragmentCompletedNALUnit = False;
      }
    } else { // case 3: deliver the next fragment
      // We are sending this NAL unit data as FU packets.  We've already sent the
      // first packet (fragment).  Now, send the next fragment.  Note that we add
      // "NAL header" and "FU header" bytes to the front.  (We reuse these bytes that
      // we already sent for the first fragment, but clear the S bit, and add the E
      // bit if this is the last fragment.)
      unsigned numExtraHeaderBytes;
      if (fHNumber == 264) {
        fInputBuffer[fCurDataOffset-2] = fInputBuffer[0];       // FU indicator
        fInputBuffer[fCurDataOffset-1] = fInputBuffer[1]&~0x80; // FU header (no S bit)
        numExtraHeaderBytes = 2;
      } else { // 265
        fInputBuffer[fCurDataOffset-3] = fInputBuffer[0];       // Payload header (1st byte)
        fInputBuffer[fCurDataOffset-2] = fInputBuffer[1];       // Payload header (2nd byte)
        fInputBuffer[fCurDataOffset-1] = fInputBuffer[2]&~0x80; // FU header (no S bit)
        numExtraHeaderBytes = 3;
      }
      unsigned numBytesToSend = numExtraHeaderBytes + (fNumValidDataBytes - fCurDataOffset);
      if (numBytesToSend > fMaxSize) {
        // We can't send all of the remaining data this time:
        numBytesToSend = fMaxSize;
        fLastFragmentCompletedNALUnit = False;
      } else {
        // This is the last fragment:
        fInputBuffer[fCurDataOffset-1] |= 0x40; // set the E bit in the FU header
        fNumTruncatedBytes = fSaveNumTruncatedBytes;
      }
      memmove(fTo, &fInputBuffer[fCurDataOffset-numExtraHeaderBytes], numBytesToSend);
      fFrameSize = numBytesToSend;
      fCurDataOffset += numBytesToSend - numExtraHeaderBytes;
    }

    if (fCurDataOffset >= fNumValidDataBytes) {
      // We're done with this data.  Reset the pointers for receiving new data:
      fNumValidDataBytes = fCurDataOffset = 1;
    }

    // Complete delivery to the client:
    // (this happens once per fragment copied out)
    FramedSource::afterGetting(this);
  }
}
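For reference, the H.264 branch above implements FU-A fragmentation as defined in RFC 6184. Schematically:

Original NAL unit:   [ NAL header | NAL payload ............................. ]

First fragment:      [ FU indicator | FU header (S=1)     | payload, part 1 ]
Middle fragment(s):  [ FU indicator | FU header (S=0,E=0) | payload, part n ]
Last fragment:       [ FU indicator | FU header (E=1)     | payload, final part ]

FU indicator = (NAL header & 0xE0) | 28  // F and NRI bits kept; type 28 = FU-A
FU header    = S | E | R | original NAL unit type (5 bits)

The H.265 branch works the same way, with a two-byte payload header and FU type 49.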

Framing

void MPEGVideoStreamFramer::doGetNextFrame() {
  fParser->registerReadInterest(fTo, fMaxSize);
  continueReadProcessing();
}
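continueReadProcessing() drives the parser. A simplified sketch (the real implementation additionally computes fPresentationTime and fDurationInMicroseconds):

void MPEGVideoStreamFramer::continueReadProcessing() {
  unsigned acquiredFrameSize = fParser->parse(); // returns 0 if more input is needed
  if (acquiredFrameSize > 0) {
    // A complete frame is now in fTo; deliver it downstream:
    fFrameSize = acquiredFrameSize;
    fNumTruncatedBytes = fParser->numTruncatedBytes();
    afterGetting(this);
  }
  // else: the parser has scheduled another read from the input source, and
  // this function will be re-entered once more data has arrived.
}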

As the code above shows, when the sink needs a frame it calls the H264or5Fragmenter's doGetNextFrame(); the fragmenter in turn calls MPEGVideoStreamFramer's doGetNextFrame(), and the framer uses its associated H264or5VideoStreamParser to read the data. parse() performs the actual H.264 framing, extracting one NAL unit at a time from the byte stream; for a detailed walkthrough, see the companion article "H264or5VideoStreamParser::parse()详解".
Each time FramedSource::getNextFrame() obtains a frame, FramedSource::afterGetting() fires the registered completion callback, here MultiFramedRTPSink::afterGettingFrame().
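afterGettingFrame() itself is the usual live555 static trampoline, recovering the sink object from the clientData pointer that packFrame() registered (a sketch consistent with the registration above):

void MultiFramedRTPSink::afterGettingFrame(void* clientData, unsigned numBytesRead,
                                           unsigned numTruncatedBytes,
                                           struct timeval presentationTime,
                                           unsigned durationInMicroseconds) {
  MultiFramedRTPSink* sink = (MultiFramedRTPSink*)clientData;
  sink->afterGettingFrame1(numBytesRead, numTruncatedBytes,
                           presentationTime, durationInMicroseconds);
}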

void FramedSource::afterGetting(FramedSource* source) {
  source->nextTask() = NULL;
  source->fIsCurrentlyAwaitingData = False;
      // indicates that we can be read again
      // Note that this needs to be done here, in case the "fAfterFunc"
      // called below tries to read another frame (which it usually will)

  if (source->fAfterGettingFunc != NULL) {
    (*(source->fAfterGettingFunc))(source->fAfterGettingClientData,
                                   source->fFrameSize, source->fNumTruncatedBytes,
                                   source->fPresentationTime,
                                   source->fDurationInMicroseconds);
  }
}
void MultiFramedRTPSink::afterGettingFrame1(unsigned frameSize, unsigned numTruncatedBytes,
                                            struct timeval presentationTime,
                                            unsigned durationInMicroseconds) {
  if (fIsFirstPacket) {
    // Record the fact that we're starting to play now:
    gettimeofday(&fNextSendTime, NULL);
  }

  fMostRecentPresentationTime = presentationTime;
  if (fInitialPresentationTime.tv_sec == 0 && fInitialPresentationTime.tv_usec == 0) {
    fInitialPresentationTime = presentationTime;
  }

  if (numTruncatedBytes > 0) {
    unsigned const bufferSize = fOutBuf->totalBytesAvailable();
    envir() << "MultiFramedRTPSink::afterGettingFrame1(): The input frame data was too large for our buffer size ("
            << bufferSize << ").  "
            << numTruncatedBytes << " bytes of trailing data was dropped!  Correct this by increasing \"OutPacketBuffer::maxSize\" to at least "
            << OutPacketBuffer::maxSize + numTruncatedBytes << ", *before* creating this 'RTPSink'.  (Current value is "
            << OutPacketBuffer::maxSize << ".)\n";
  }
  unsigned curFragmentationOffset = fCurFragmentationOffset;
  unsigned numFrameBytesToUse = frameSize;
  unsigned overflowBytes = 0;

  // If we have already packed one or more frames into this packet,
  // check whether this new frame is eligible to be packed after them.
  // (This is independent of whether the packet has enough room for this
  // new frame; that check comes later.)
  if (fNumFramesUsedSoFar > 0) {
    if ((fPreviousFrameEndedFragmentation
         && !allowOtherFramesAfterLastFragment())
        || !frameCanAppearAfterPacketStart(fOutBuf->curPtr(), frameSize)) {
      // Save away this frame for next time:
      numFrameBytesToUse = 0;
      fOutBuf->setOverflowData(fOutBuf->curPacketSize(), frameSize,
                               presentationTime, durationInMicroseconds);
    }
  }
  fPreviousFrameEndedFragmentation = False;

  if (numFrameBytesToUse > 0) {
    // Check whether this frame overflows the packet
    if (fOutBuf->wouldOverflow(frameSize)) {
      // Don't use this frame now; instead, save it as overflow data, and
      // send it in the next packet instead.  However, if the frame is too
      // big to fit in a packet by itself, then we need to fragment it (and
      // use some of it in this packet, if the payload format permits this.)
      if (isTooBigForAPacket(frameSize)
          && (fNumFramesUsedSoFar == 0 || allowFragmentationAfterStart())) {
        // We need to fragment this frame, and use some of it now:
        overflowBytes = computeOverflowForNewFrame(frameSize);
        numFrameBytesToUse -= overflowBytes;
        fCurFragmentationOffset += numFrameBytesToUse;
      } else {
        // We don't use any of this frame now:
        overflowBytes = frameSize;
        numFrameBytesToUse = 0;
      }
      fOutBuf->setOverflowData(fOutBuf->curPacketSize() + numFrameBytesToUse,
                               overflowBytes, presentationTime, durationInMicroseconds);
    } else if (fCurFragmentationOffset > 0) {
      // This is the last fragment of a frame that was fragmented over
      // more than one packet.  Do any special handling for this case:
      fCurFragmentationOffset = 0;
      fPreviousFrameEndedFragmentation = True;
    }
  }

  if (numFrameBytesToUse == 0 && frameSize > 0) {
    // Send our packet now, because we have filled it up:
    sendPacketIfNecessary();
  } else {
    // Use this frame in our outgoing packet:
    unsigned char* frameStart = fOutBuf->curPtr();
    fOutBuf->increment(numFrameBytesToUse);
        // do this now, in case "doSpecialFrameHandling()" calls "setFramePadding()" to append padding bytes

    // Here's where any payload format specific processing gets done:
    doSpecialFrameHandling(curFragmentationOffset, frameStart,
                           numFrameBytesToUse, presentationTime,
                           overflowBytes);

    ++fNumFramesUsedSoFar;

    // Update the time at which the next packet should be sent, based
    // on the duration of the frame that we just packed into it.
    // However, if this frame has overflow data remaining, then don't
    // count its duration yet.
    if (overflowBytes == 0) {
      fNextSendTime.tv_usec += durationInMicroseconds;
      fNextSendTime.tv_sec += fNextSendTime.tv_usec/1000000;
      fNextSendTime.tv_usec %= 1000000;
    }

    // Send our packet now if (i) it's already at our preferred size, or
    // (ii) (heuristic) another frame of the same size as the one we just
    //      read would overflow the packet, or
    // (iii) it contains the last fragment of a fragmented frame, and we
    //      don't allow anything else to follow this or
    // (iv) one frame per packet is allowed:
    if (fOutBuf->isPreferredSize()
        || fOutBuf->wouldOverflow(numFrameBytesToUse)
        || (fPreviousFrameEndedFragmentation &&
            !allowOtherFramesAfterLastFragment())
        || !frameCanAppearAfterPacketStart(fOutBuf->curPtr() - frameSize,
                                           frameSize) ) {
      // The packet is ready to be sent now
      sendPacketIfNecessary();
    } else {
      // There's room for more frames; try getting another:
      packFrame();
    }
  }
}
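The packing decision above can be summarized as follows (an informal restatement, not live555 code):

if the new frame would overflow the current packet:
    if the frame alone is too big for any packet (and fragmentation is allowed here):
        use the part that fits now, save the rest as overflow data
    else:
        save the whole frame as overflow data and send the current packet
else:
    pack the frame, then send the packet if it has reached the preferred size,
    if another frame of the same size would overflow it, or if the payload format
    allows only one frame per packet; otherwise call packFrame() again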
void MultiFramedRTPSink::sendPacketIfNecessary() {
  if (fNumFramesUsedSoFar > 0) {
    // Send the packet:
#ifdef TEST_LOSS
    if ((our_random()%10) != 0) // simulate 10% packet loss #####
#endif
      if (!fRTPInterface.sendPacket(fOutBuf->packet(), fOutBuf->curPacketSize())) {
        // if failure handler has been specified, call it
        if (fOnSendErrorFunc != NULL) (*fOnSendErrorFunc)(fOnSendErrorData);
      }
    ++fPacketCount;
    fTotalOctetCount += fOutBuf->curPacketSize();
    fOctetCount += fOutBuf->curPacketSize()
      - rtpHeaderSize - fSpecialHeaderSize - fTotalFrameSpecificHeaderSizes;

    ++fSeqNo; // for next time
  }

  if (fOutBuf->haveOverflowData()
      && fOutBuf->totalBytesAvailable() > fOutBuf->totalBufferSize()/2) {
    // Efficiency hack: Reset the packet start pointer to just in front of
    // the overflow data (allowing for the RTP header and special headers),
    // so that we probably don't have to "memmove()" the overflow data
    // into place when building the next packet:
    unsigned newPacketStart = fOutBuf->curPacketSize()
      - (rtpHeaderSize + fSpecialHeaderSize + frameSpecificHeaderSize());
    fOutBuf->adjustPacketStart(newPacketStart);
  } else {
    // Normal case: Reset the packet start pointer back to the start:
    fOutBuf->resetPacketStart();
  }
  fOutBuf->resetOffset();
  fNumFramesUsedSoFar = 0;

  if (fNoFramesLeft) {
    // We're done:
    onSourceClosure();
  } else {
    // We have more frames left to send.  Figure out when the next frame
    // is due to start playing, then make sure that we wait this long before
    // sending the next packet.
    struct timeval timeNow;
    gettimeofday(&timeNow, NULL);
    int secsDiff = fNextSendTime.tv_sec - timeNow.tv_sec;
    int64_t uSecondsToGo = secsDiff*1000000 + (fNextSendTime.tv_usec - timeNow.tv_usec);
    if (uSecondsToGo < 0 || secsDiff < 0) { // sanity check: Make sure that the time-to-delay is non-negative:
      uSecondsToGo = 0;
    }

    // Delay this amount of time:
    nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecondsToGo, (TaskFunc*)sendNext, this);
  }
}
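The final scheduleDelayedTask() call implements send pacing. A worked example with illustrative numbers: for 25 fps video each frame carries durationInMicroseconds = 40000, so afterGettingFrame1() advances fNextSendTime by 40 ms per packed frame; if 3 ms have already elapsed since the previous send, uSecondsToGo comes out at roughly 37000, and sendNext() (which calls buildAndSendPacket() again) runs about 37 ms later.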
