this post was submitted on 14 Jul 2024
35 points (94.9% liked)

Linux


I'm trying to extract the frames of a video as individual images, but it's really slow, except when I'm using JPEG. The obvious issue with JPEG is the data loss from the compression; I want the images to be lossless. Extracting them as JPEGs manages about 50-70 fps, but as PNGs it's only 4 fps, and it seems to keep getting slower: after 1 minute of the 11-minute video it's down to 3.5 fps.

I suspect it's because I'm doing this on an external 5 TB hard drive connected over USB 3.0, and the write speed can't keep up. So my idea was to use a different image format. I tried lossless JPEG XL and lossless WebP, but both of them are even slower, only managing about 0.5 fps. I have no idea why that's so slow; the files are a lot smaller than PNGs, so it can't be because of the write speed.

I would appreciate it if anyone could help me with this.
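For reference, the two variants being compared boil down to ffmpeg invocations like these. This is only a sketch: the `extract_cmd` helper and the file names are hypothetical placeholders, while the flags themselves are standard ffmpeg options.

```python
import shlex

# Hypothetical helper just to compare the variants; file names are placeholders.
def extract_cmd(video, outdir, ext, extra=()):
    """Build an ffmpeg frame-extraction command as an argv list."""
    return ["ffmpeg", "-i", video, *extra, f"{outdir}/%06d.{ext}"]

# Lossy but fast: high-quality JPEG (lower qscale = higher quality)
jpg = extract_cmd("video.mp4", "frames", "jpg", ["-qscale:v", "2"])
# Lossless but slow: PNG (DEFLATE-compressed, one frame at a time)
png = extract_cmd("video.mp4", "frames", "png")

print(shlex.join(jpg))
print(shlex.join(png))
```

Building the argv as a list avoids shell-quoting surprises if this is ever wrapped in a script with subprocess.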

top 26 comments
[–] [email protected] 14 points 3 months ago (1 children)

Honestly I don't know, but it seems to me like extracting every single frame of a video as a lossless PNG is only really necessary if you're trying to archive something or do frame-by-frame restoration. Either way, it's hopefully something you aren't doing every day, so why not just let it run overnight and move on?

Otherwise, ask yourself whether you could settle for extracting just a single clip/section, or what's actually wrong with lossy JPEG at a low -qscale:v (high quality): start around 5 and work down until you visually can't see any difference.

[–] [email protected] 3 points 3 months ago (2 children)

I'm doing this to upscale and interpolate the video, and I want the best quality possible, since the source uses h.264 and I'm exporting to AV1. I was using JPEG with qscale:v 0 and 100% quality, but you could still see compression artifacts, which is why I want to use a lossless format now. The upscaling and interpolation also take quite a lot of time, so I'm trying to minimize the time each step takes where possible, since I'll be doing this with multiple videos and will probably reuse the scripts I made a few more times in the future.

[–] [email protected] 4 points 3 months ago (1 children)

Have you verified that they're actually new jpeg artifacts, not just the h264 artifacts?

[–] [email protected] 2 points 3 months ago

Yes, I compared it to the same frame exported as a png

[–] [email protected] 2 points 3 months ago (1 children)

Have you considered using Av1an? It supports VapourSynth, which has a large number of upscaling and frame-interpolation tools, AI-based or not. If your upscaler supports VapourSynth, it could be a much better option.

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago)

I use upscayl-ncnn (basically just the cli version of Upscayl) and it doesn't support vapoursynth. I've heard of it before but I don't really know what it is or how to use it.

[–] [email protected] 6 points 3 months ago (1 children)

It probably becomes CPU limited with those other compression algorithms.

You could use something like atop to find the bottleneck.

[–] [email protected] 3 points 3 months ago (1 children)

Yeah, that's probably the case for those. I looked at CPU usage when using WebP and one CPU core was always at 100%. Even though it apparently can't use multiple cores, that's still really slow, no? Or is that normal?

Also, my CPU is a Ryzen 5 3600, just to get an idea of what performance would be expected.

[–] [email protected] 2 points 3 months ago (1 children)

My first thought was similar: there might be some hardware acceleration happening for the JPEGs that isn't for the other formats, resulting in a CPU bottleneck. A modern hard drive over USB 3.0 should be capable of hundreds of megabits to several gigabits per second, so it seems unlikely that's your bottleneck (though feel free to share stats and correct the assumption if that's wrong; if your PNGs are in the 40-megabyte range, your 3.5 per second would be pretty taxing).

If you are seeing only 1 CPU core at 100%, perhaps you could split the video clip, and process multiple clips in parallel?

[–] [email protected] 1 points 3 months ago

At this point I'm very sure the drive speed actually is the bottleneck, though I'm not sure why it's so slow. Splitting it is an interesting idea; maybe it's also possible to tell ffmpeg to extract only every 6th frame and to start at a different frame for each of the 6 cores.
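The "every 6th frame per worker" idea can be sketched with ffmpeg's select filter. This is only a sketch: the `worker_cmd` helper, the worker count, and the file names are hypothetical, and note that each worker still decodes the entire video, so this only helps when encoding (not decoding or writing) is the bottleneck.

```python
# Each worker keeps frames whose index n satisfies n % workers == i.
# -vsync vfr stops ffmpeg from duplicating frames to fill the gaps the
# select filter leaves behind.
def worker_cmd(i, workers=6, video="video.mp4", outdir="extract"):
    return [
        "ffmpeg", "-i", video,
        "-vf", f"select='eq(mod(n,{workers}),{i})'",
        "-vsync", "vfr",
        f"{outdir}/w{i}_%06d.png",  # per-worker prefix avoids name clashes
    ]

cmds = [worker_cmd(i) for i in range(6)]
```

The per-worker numbering is interleaved (worker 0 writes frames 0, 6, 12, ...), so the files would need renaming afterwards if the downstream tool expects a single contiguous sequence.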

[–] [email protected] 5 points 3 months ago* (last edited 3 months ago) (1 children)

A) Export using a lower effort; with libjxl, effort 2 or so will be fine.

B) Export to a faster image format like QOI, TIFF, or PPM/PNM.

PNG, JXL and WebP all have fairly high encode times with ffmpeg's defaults, so lower the effort or use a faster format.

If you think it really could be a write-speed limitation, encode to a ramdisk first and transfer afterwards, if you have the spare RAM; but using a different, faster format will probably help, as PNG is still very slow to encode. (Writing to /tmp is fine for this.)
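The ramdisk suggestion amounts to a two-stage pipeline, sketched below under the assumption of a Linux system where /tmp is tmpfs (RAM-backed). The `extract_via_tmp` function and the `make_cmd` callable are hypothetical names for illustration.

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def extract_via_tmp(make_cmd, dest):
    """Run an extraction command into a temp dir, then move the results
    to the (slow) destination in one sequential pass.

    make_cmd: callable taking the temp dir path, returning an argv list.
    """
    dest = Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    with tempfile.TemporaryDirectory(dir="/tmp") as tmp:
        subprocess.run(make_cmd(tmp), check=True)
        for f in sorted(Path(tmp).iterdir()):
            shutil.move(str(f), dest / f.name)
```

In practice `make_cmd` would return an ffmpeg argv such as `["ffmpeg", "-i", "video.mp4", f"{tmp}/%06d.png"]`; the sequential move at the end is what a spinning USB drive handles best. For a 300 GB job the whole thing would have to be done in chunks, since /tmp won't hold it all at once.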

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago) (1 children)

A) I actually didn't know about this before; do you know which ffmpeg option sets the effort?

B) I tried those, but it's the same issue as with PNG: the hard drive's write speed is too slow (or it's the USB 3 connection, but the result is the same).

Edit: Just found out how to set the effort. Setting it to 1 is quite a bit faster, but still slow at only 3.8 fps.

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago) (1 children)

What are your system specs? At a low effort you should be getting a lot more fps. What CLI command are you using? Either way, I guess it would be best to export to /tmp, given enough RAM, and go from there.

EDIT: for context, when encoding with libjxl I would use -distance 0 -effort 2 for lossless output.

[–] [email protected] 2 points 3 months ago

I have a Ryzen 5 3600. My command was ffmpeg -i video.mp4 -threads 12 -distance 0 -effort 1 extract/%06d.jxl.

[–] [email protected] 2 points 3 months ago (2 children)

PNG is a rather slow format based on the DEFLATE compression from zip/gzip. You could extract to BMP or some other uncompressed format. First, to ensure it's lossless, make sure the format supports the video's pix_fmt without needing conversion.
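Checking the source's pix_fmt before picking an output format comes down to an ffprobe query along these lines (the helper name is hypothetical; the ffprobe flags are standard):

```python
# Build an ffprobe argv that prints only the first video stream's pixel
# format, e.g. "yuv420p". Run it with subprocess.run(..., capture_output=True).
def probe_pix_fmt_cmd(video):
    return [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",            # first video stream only
        "-show_entries", "stream=pix_fmt",   # just this one field
        "-of", "default=noprint_wrappers=1:nokey=1",  # bare value, no labels
        video,
    ]
```

If the result is a subsampled YUV format like yuv420p, an RGB-only container (BMP) forces a conversion; something like PGM/PPM or TIFF may be able to hold the planes more directly.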

[–] [email protected] 2 points 3 months ago (1 children)

Using bmp has the same bottleneck as png, which is the write speed of the hard drive

[–] [email protected] 5 points 3 months ago (1 children)

Well, you found your problem then. You will need to get a decent quality SSD to speed it up. Avoid those cheap QLC SSDs, they are slower than mechanical hard drives once the SLC cache fills up.

[–] [email protected] 0 points 3 months ago

I don't really wanna buy another SSD just for this. I already have two SSDs in my PC; I just don't have enough storage left. All the frames together are gonna be like 300 GB.

[–] [email protected] 1 points 3 months ago

Going from YUV to RGB won't incur any meaningful loss; going from RGB to YUV, on the other hand, can, but it's rare that it actually happens as long as you aren't messing up your bit depth too much.

[–] [email protected] -1 points 3 months ago (1 children)

I'll bet that going from MPEG to JPEG it doesn't have to re-encode the image, which it does with the other formats.

[–] [email protected] 4 points 3 months ago (1 children)

h.264 (the compression algorithm the video uses) and jpeg are entirely different, so it does have to re-encode

[–] [email protected] 0 points 3 months ago (1 children)

Actually, they both use the Discrete Cosine Transform!

PNG uses DEFLATE, a generic compression standard that exhaustively searches for smaller ways to compact the data.

I would recommend comparing the quality of images in the different formats against each other to see if there is noticeable lossiness.

If the PNGs are indeed better, try setting the initial compression of the PNGs to zero and come back later to "crush" them smaller.
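The "store fast now, crush later" suggestion could look roughly like this. A sketch only: it assumes ffmpeg's PNG encoder honours -compression_level (zlib level 0 = store), and it picks optipng as one possible crusher; both helper names are hypothetical.

```python
# Stage 1: write PNGs with zlib level 0 (fast, large files).
def fast_png_cmd(video, outdir="frames"):
    return ["ffmpeg", "-i", video,
            "-compression_level", "0",
            f"{outdir}/%06d.png"]

# Stage 2 (later, offline): recompress a PNG in place with optipng.
def crush_cmd(png_path):
    return ["optipng", "-o2", png_path]  # hypothetical follow-up pass
```

Note this trades encode time for write volume: level-0 PNGs are far bigger, so if the drive's write speed is the actual bottleneck, this makes things worse rather than better.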

[–] [email protected] 1 points 3 months ago (1 children)

Even if they use the same technique, they're entirely different algorithms, and h.264 also takes information from multiple frames, which is why the video is 1.7 GB but a folder with each frame saved as a PNG is over 300 GB.

The formats with the best compression, where it might be fine, are JPEG XL and WebP, as far as I know. They're even slower though, because they're so CPU-intensive and only use one thread.

Setting the PNG compression to 0 doesn't help, because the bottleneck for PNG is the hard drive's write speed. I already tried that.

[–] [email protected] 1 points 3 months ago

Yeah, that makes sense. There might be some useful interface in VAAPI?

[–] [email protected] -2 points 3 months ago (1 children)

PNG is a good format for graphics, lettering, logos... not photography. So unless your video is some cartoon, you're using PNG compression for something it's not meant for.

[–] [email protected] 3 points 3 months ago* (last edited 3 months ago)

I agree that you're not really leveraging any features of PNG like you would using JPEG or RAW here, but saying it's not meant for this use is an odd way to phrase it. There's nothing inherently wrong with wanting lossless compression on an image...