Talk:Motion JPEG
This article has not yet been rated on Wikipedia's content assessment scale.
Encoding section
Quote: "Motion JPEG uses a lossy form of intraframe compression based on the discrete cosine transform (DCT). This mathematical operation converts each frame/field of the video source from the time domain into the frequency domain (aka transform domain.)" I believe an error has crept in. Just like JPEG image compression, the DCT should be converting from the spatial-position domain into the spatial-frequency domain within the single image. If it is really a time-domain to temporal-frequency conversion, it contradicts what I know of MJPEG and contradicts the statement that each JPEG frame image is encoded independently of adjacent frames and that this part of the encoding is intraframe compression. Perhaps someone who actually knows for sure could make the correction, as my knowledge is theoretical and I lack specific detail on MJPEG. Dynamicimanyd (talk) 19:25, 22 March 2011 (UTC)
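To illustrate the point being made, here is a minimal sketch (assuming NumPy and SciPy, which are not part of the original discussion): the 2-D DCT used by JPEG, and hence by each Motion JPEG frame, runs over the two spatial axes of an 8x8 block within a single frame, so it maps spatial position to spatial frequency and never involves a time axis.

```python
import numpy as np
from scipy.fft import dct

# One 8x8 block of luma samples taken from a single frame (values 0-255).
block = np.random.default_rng(0).integers(0, 256, size=(8, 8)).astype(float)

# JPEG-style 2-D DCT-II: transform the rows, then the columns.
# Both axes are spatial coordinates within the frame; time never enters
# the transform, which is why each frame can be encoded independently
# (intraframe compression).
coeffs = dct(dct(block - 128, axis=0, norm='ortho'), axis=1, norm='ortho')

print(coeffs.round(1))  # coeffs[0, 0] is the DC term; the rest are spatial frequencies
```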
Digital Cameras section
Through heavy use of the past tense, this section implies that MJPEG in digital cameras is of historical interest only and no longer used by these cameras. This is not true, as new cameras are still being released with MJPEG movie facilities, and I feel this section should be reworded. This section also states that such cameras only record at up to 320x240 at a maximum of 15 fps; again this is untrue, as resolutions such as 640x480 at 30 fps are now common. Cpc464 (talk) 11:42, 16 June 2008 (UTC)
A comment on the 'criticisms' section: the first bullet stresses the existence of MJPEG "compatibility concerns", while in the last bullet, "broad compatibility" is attributed to the format. Would the originator of the text, or an expert in the area, please clarify this contradiction? Previously Used Coffins (talk) 20:07, 4 June 2008 (UTC)
Is anyone else confused by these statements?
"The resulting quality of intraframe video compression is independent from the motion in the image which differs from MPEG video where quality often decreases when footage contains lots of movement."
"Prior to the recent rise in MPEG-4 encoding in consumer devices, a progressive scan form of MJPEG also saw widespread use in e.g. the "movie" modes of Digital Still Cameras, allowing video encoding and playback through the integrated JPEG compression hardware with only a software modification. Again, the resultant quality is markedly reduced compared to MPEG compression at a similar bitrate, particularly as sound (when included) was often uncompressed PCM or low-compression (and low processor-demand) ADPCM."
I read the first one to say that MJPEG can offer better quality than MPEG, particularly when there's a lot of motion. The second statement seems to contradict that. How can it say "Again, the resultant quality is markedly reduced..." when there was nothing stated to that effect prior in the article?
—The preceding unsigned comment was added by SalsaShark42 (talk • contribs) .
- The first statement was misleading. I changed it. It did seem to give the impression that MJPEG provided better quality than MPEG when there's a lot of motion, but that's an incorrect impression. The degree of compression superiority of older MPEG technology (MPEG-1, for example) over JPEG was primarily the result of interframe prediction. If you don't use the interframe prediction, you get about the same compression capability as JPEG. Not worse. Just not better. In a way it's sort of like saying that if you don't compress the video pictures at all and just use PCM instead you get the advantage that the quality doesn't vary as a function of the picture's spatial frequency content. Maybe the compression is not varying, but it's not very good either. Or maybe it's like saying that if you stick your money under your mattress instead of in a savings account then you have the advantage that your return rate (of zero percent) is independent of the market interest rate fluctuations. -Mulligatawny 23:47, 26 September 2006 (UTC)
- Further to these comments on the Criticisms section ... maybe it needs to be counterpointed with an "advantages" section (though I'm loath to try painting this format as objectively "good")? E.g. it requires only relatively simple encode/decode hardware and processing/battery power, it's simple to pull individual frames out as still images (of questionable but consistent quality), and it allows frame-accurate editing with no re-encoding (just make sure you have a high original data rate for good results), etc. The criticisms there are valid, but it's rather one-sided at the moment. 77.102.101.220 (talk) 22:37, 6 November 2009 (UTC)
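On the "simple to pull individual frames out" point, here is a minimal sketch (the file names and the helper function are hypothetical, and it assumes a raw MJPEG byte stream with no container and no embedded thumbnails) that recovers still JPEGs by scanning for the JPEG start-of-image (FF D8) and end-of-image (FF D9) markers. This works precisely because every MJPEG frame is a complete, independently decodable JPEG.

```python
def split_mjpeg_frames(data: bytes):
    """Yield each complete JPEG image found in a raw MJPEG byte stream."""
    SOI, EOI = b"\xff\xd8", b"\xff\xd9"  # JPEG start/end-of-image markers
    pos = 0
    while True:
        start = data.find(SOI, pos)
        if start == -1:
            return
        end = data.find(EOI, start + 2)
        if end == -1:
            return
        yield data[start:end + 2]  # one self-contained JPEG frame
        pos = end + 2

# Hypothetical usage: dump every frame of stream.mjpeg as a still image.
with open("stream.mjpeg", "rb") as f:
    for i, frame in enumerate(split_mjpeg_frames(f.read())):
        with open(f"frame_{i:05d}.jpg", "wb") as out:
            out.write(frame)
```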
Can anyone tell me the best way to save Motion JPEG video files as MP4 files? When I play these files they look fine, but when I try to save them they break up and look very soft. —The preceding unsigned comment was added by Photogold (talk • contribs) .
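Not from the article, but one common approach (assuming the ffmpeg command-line tool is installed; the file names and settings here are placeholders, not a recommendation from any source) is to re-encode the MJPEG stream to H.264 in an MP4 container and choose the quality explicitly with -crf, since letting an encoder default to a low bitrate is a frequent cause of the softness described above.

```python
import subprocess

# Hypothetical file names; -crf 18 is visually near-lossless for most sources,
# and "-pix_fmt yuv420p" keeps the MP4 playable in common players.
subprocess.run([
    "ffmpeg", "-i", "input_mjpeg.avi",
    "-c:v", "libx264", "-crf", "18", "-preset", "slow",
    "-pix_fmt", "yuv420p",
    "-c:a", "aac",
    "output.mp4",
], check=True)
```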
It is stated that there are multiple formats of MJPEG, but there is no mention of where to find specifications for them. (unsigned)
"These features are not to be undervalued, as many applications in high framerate real time image processing require a stable open-source solution that can be easily implemented with a minimal impact on the processor load.[citation needed]".
Not a citation but a little explanation. Video non-linear editing systems often need to decode the video in the CPU rather than in the GPU because the GPU is set up to display frames rather than process them. They also often need to decode 2 video streams simultaneously to implement simple effects like crossfade and wipe. Decoding a single HD stream can severely tax a CPU. Decoding two HD video streams plus whatever processing is done on them is a lot of work for the CPU. Another consideration for video non-linear editing (NLE) systems, and some other applications, that this article fails to take into account is that if a system needs to skip frames to keep up in real time, a system using interframe compression could depend on frames that were thrown away. In a program like Cinelerra, the plugins may need preceding frames for each frame output, and if you need even more frames to reconstruct those frames due to the codec, then the number of frames required can be very high. Even if you try to synchronize non-skipped frames with the full frames in the original stream, the need by effects plugins for preceding frames can thwart that. If you have a repeating pattern of one full frame followed by 9 incremental frames, then displaying every other frame still requires decoding every frame. Displaying every 5th frame requires decoding the first 6 frames out of each group, so you find yourself decoding 60% of frames instead of 20%. If you apply a plugin that needs the previous frame for each frame processed, then you need to decode all 10 frames (100%) even when displaying every fifth, because you need to decode the tenth frame to get the preceding frame for the 11th, and decoding the tenth requires decoding all 9 preceding frames. Thus an encoding format where each frame can be decoded independently of the others is an advantage. Whitis (talk) 04:40, 30 April 2008 (UTC)
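A small sketch of the arithmetic in the comment above (the GOP structure of one intra frame followed by nine predicted frames, and the frame counts, are illustrative assumptions): it counts how many frames must actually be decoded when only every Nth frame is displayed, with and without an effect that also needs each displayed frame's predecessor.

```python
def frames_to_decode(total_frames, gop_size, display_step, needs_previous=False):
    """Count decoded frames for an IPPP... stream when showing every Nth frame."""
    needed = set()
    for shown in range(0, total_frames, display_step):
        targets = {shown}
        if needs_previous and shown > 0:
            targets.add(shown - 1)                 # effect plugin also wants the prior frame
        for t in targets:
            i_frame = (t // gop_size) * gop_size   # start of t's group of pictures
            needed.update(range(i_frame, t + 1))   # P-frames chain back to the I-frame
    return len(needed)

total = 100  # ten groups of 1 I-frame + 9 P-frames
print(frames_to_decode(total, 10, 5))                       # 60 -> 60% decoded for 20% shown
print(frames_to_decode(total, 10, 5, needs_previous=True))  # 96 -> nearly every frame decoded
print(frames_to_decode(total, 1, 5))                        # 20 -> intraframe-only (MJPEG-like)
```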
History
Would a date of first use/implementation/distribution be appropriate? I vaguely recall that QuickTime 2.0 introduced playback and encoding support. I have no date for that, maybe 1991? Though at the time hard drives were so small you wondered why you'd use it. Dlamblin (talk) 16:08, 18 March 2010 (UTC)
Just a quick note
The Cambozola mention in the page links to some cheese wiki entry instead of the actual Java applet wiki (if there is one) or website. I found it funny, but... Just thought I'd bring that up. —Preceding unsigned comment added by 98.225.255.118 (talk) 06:12, 28 May 2009 (UTC)
Sample video
I couldn't find a sample video, so I created one and put it up at http://jjc.freeshell.org/turning_pages.html Feel free to use it if you need a public domain video. It works in Apple QuickTime and Fedora's Totem. It's not very exciting, just me turning pages in a 1922 book. Jrincayc (talk) 03:07, 25 May 2009 (UTC)
Sample video doesn't work — Preceding unsigned comment added by 83.244.151.122 (talk) 09:26, 3 February 2016 (UTC)
Laymen's terms
In laymen's terms, quantization is a method for optimally reducing a large number scale (with different occurrences of each number) into a smaller one, and the transform domain is a convenient representation of the image because the high-frequency coefficients, which contribute less to the overall picture than other coefficients, are characteristically small values with high compressibility.
What part of all this is in laymen's terms? I propose re-writing or removing "In laymen's terms". --Psignoret (talk) 01:10, 22 April 2011 (UTC)
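For what it's worth, the sentence quoted above may be easier to show than to paraphrase. Here is a minimal sketch (the coefficient values and the quantization table are invented for illustration and are only loosely JPEG-like) of quantizing an 8x8 block of DCT coefficients, showing how the small high-frequency values collapse to zero and therefore compress very well.

```python
import numpy as np

# Invented example: DCT coefficients of one 8x8 block. The large values sit
# in the top-left corner (low spatial frequencies); the rest are small.
coeffs = np.zeros((8, 8))
coeffs[0, 0], coeffs[0, 1], coeffs[1, 0] = 620.0, -75.0, 48.0
coeffs += np.random.default_rng(1).normal(0, 3, size=(8, 8))  # small high-frequency values

# Illustrative quantization table: coarser steps for higher frequencies.
quant = 16 + 4 * (np.arange(8)[:, None] + np.arange(8)[None, :])

quantized = np.round(coeffs / quant).astype(int)
print(quantized)  # mostly zeros away from the top-left corner
print(np.count_nonzero(quantized), "non-zero coefficients out of 64")
```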
Not being a container
Unlike e.g. Motion JPEG 2000 (and common video formats) that permits the carriage of audio, the older (and incompatible with) Motion JPEG doesn't code any audio, as it's simply a concatenation of still JPEG frames.[1] In a suitable container format, e.g. AVI, audio can however be provided.
This is misleading. It is highly uncommon that a video format is also a container format. In fact, it's the norm that you pack AV1/H.264/H.265 in a suitable container such as MKV, AVI, TS, MP4, ... I would remove the whole paragraph from the introduction. It is not relevant to MJPEG that MJPEG2000 also defines a container for audio. --84.112.125.136 (talk) 12:05, 23 August 2020 (UTC)
- I agree the paragraph is misleading (the "and common video formats" part is simply wrong, as no common video coding format includes audio as part of its specification), plus it uses an anonymous discussion at Stack Overflow as a reference, which is not a reliable source. So I removed the paragraph.—J. M. (talk) 15:36, 23 August 2020 (UTC)
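To make the codec-versus-container distinction above concrete, here is a sketch (assuming the ffmpeg command-line tool is installed; the file names are placeholders) that adds an audio track to an MJPEG-in-AVI file without re-encoding the video, since the audio is carried by the container rather than by the MJPEG stream itself.

```python
import subprocess

# Hypothetical inputs: an AVI holding MJPEG video with no audio, plus a WAV track.
# "-c:v copy" keeps the JPEG frames untouched; only the container carries the audio.
subprocess.run([
    "ffmpeg",
    "-i", "video_mjpeg.avi",
    "-i", "audio.wav",
    "-c:v", "copy",
    "-c:a", "pcm_s16le",
    "muxed.avi",
], check=True)
```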
Phabricator ticket
There’s a Phabricator ticket for this to be enabled on Wikimedia sites here: http://phabricator.wikimedia.org/T159885 Victor Grigas (talk) 14:43, 9 November 2023 (UTC)