My brand new SanDisk Clip Sport Plus MP3 player

Well, MP3 players are a thing of the past, I know that much, but I do appreciate the compact size and dedicated hardware, so I got this one. It came with 32GB of storage. Here are my findings:

1- You need to heat up and remove the glass before you remove the frame, as the frame has 4 screws under the glass!

2- The storage is a simple 32GB SD card. I took it out, used dd to copy it to a 128GB SD card, put the new card in, then used the format function on the device itself, and it works!

That’s it, you can expand this to 128GB (probably more, but I can’t guarantee that). But again, the firmware of the device is on the SD card, so you need to duplicate the old card onto the new one before you put the new card in.
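The duplication step is nothing fancier than a raw dd copy; here is a minimal sketch, assuming the old card shows up as /dev/sdX and the new one as /dev/sdY (both names are placeholders, check with lsblk first):

lsblk                                                # identify the two cards before touching anything
sudo dd if=/dev/sdX of=/dev/sdY bs=4M status=progress # raw copy of the old card onto the new one
sync                                                 # flush writes before removing the card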

Sunshine and Moonlight

VNC and RDP are great and all, and for many purposes they are the go-to solution for remoting into a machine.

Now, another solution, which is great (and much better if you have the bandwidth), is to stream your screen as video and do all the work on the server rather than the client.

The solution used to be NVIDIA’s GameStream, which NVIDIA has since abandoned; the new solution built on the same protocol is Sunshine (the server) and Moonlight (the client).

The Sunshine+Moonlight duo works on almost every platform I need: Windows, Mac, Android, iOS, even LG TVs running webOS. In short, it is a more universal solution. You can even create a virtual, non-existent monitor under Linux and stream that to a different device!

So, let us start with the server (Sunshine)

Installing Sunshine on Debian is very easy, as a .deb installation file is provided. Sunshine is not yet in the Debian repositories, but if I understand the license correctly, it may get there some time in the future.

Now, go to the Sunshine website and download the .deb file. In my case, I visit this webpage and download the sunshine-debian-bookworm-amd64.deb file.

Now, from the command prompt, cd to the directory where your .deb file resides, then run “sudo apt install ./sunshine-debian-bookworm-amd64.deb”. We should now have Sunshine installed; to start the server, type “sunshine” on the command line.
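Put together, it is just a couple of commands (the directory and filename below are from my Bookworm download; adjust them to wherever you saved your .deb):

cd ~/Downloads                                         # or wherever the .deb was saved
sudo apt install ./sunshine-debian-bookworm-amd64.deb  # installs Sunshine and its dependencies
sunshine                                               # start the server as your normal user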

Point a web browser to https://localhost:47990/, ignore the self-signed certificate warning, and set your username and password.

Now your Debian computer is running a Sunshine server. Go to any other machine where you want the client, install Moonlight from here, and connect to your server by its IP address.

You are done !

Creating images with AI (Programming)

So, I will start by dropping a few keywords here and what they are about. I am compiling this as a guideline on where to start.

  • VQGAN (Vector Quantized Generative Adversarial Network / neural network) : The software that generates the image
  • CLIP (Contrastive Language-Image Pre-training / neural network) : Software to influence a generated image based on input text (User prompt)
  • VQGAN+CLIP : Two neural network pieces of software that work in tandem.
  • CLIP-Guided-Diffusion: A technique for doing text-to-image synthesis cheaply using pre-trained CLIP and diffusion models.
  • Google Colab notebook: A tool made by Google where you can run Python code and utilize Google’s GPUs; both paid and free tiers exist

Downloading video from YouTube

This might sound complicated, but YouTube videos come in many shapes; many of the available streams are either video only or audio only!

The best way to get a full-resolution video is to download the video and audio streams separately, then combine them!

But before we go there: YouTube throttles the speed you download videos at, so youtube-dl needs patching; an alternative is yt-dlp (see here).

The easiest way to install it would be

apt install yt-dlp

Now, if you insist on pip, you can do the following

apt install python3-pip

python3 -m pip install --force-reinstall https://github.com/yt-dlp/yt-dlp/archive/master.tar.gz

Now, let us try downloading a video/audio

yt-dlp -F https://www.youtube.com/watch?v=mUvrLxaSolc

The line above will show you a bunch of available formats and their IDs; what you need to do now is download the ones you need with a command such as

yt-dlp -f 270 https://www.youtube.com/watch?v=mUvrLxaSolc

Now, to combine them (Audio and video) without re-encoding…

ffmpeg -i ao.webm -i vo.webm -c:v copy -c:a copy output.webm

But remember when you download,

Only VP8 or VP9 or AV1 video and Vorbis or Opus audio and WebVTT subtitles are supported for WebM.
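As a side note, yt-dlp can also pick the best streams and do the merge for you (it calls ffmpeg internally); something along these lines should work, with MKV chosen as the merge container because it accepts any codec combination:

yt-dlp -f "bestvideo+bestaudio" --merge-output-format mkv https://www.youtube.com/watch?v=mUvrLxaSolc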

ffmpeg cheat sheet

First of all, after using yt-dlp to download the 2 webm files for a video, you can combine them without re-encoding using the following command (see here). Just make sure both files hold WebM-compatible streams (VP8, VP9 or AV1 video; Vorbis or Opus audio).

ffmpeg -i ao.webm -i vo.webm -c:v copy -c:a copy output.webm

To extract a section from the file resulting from the command above:

ffmpeg -ss 00:02:39.000 -i output.webm -t 00:00:05.000 -c copy out.mp4

Converting a file you have downloaded (or any other file) into MP4, since some Windows computers will not play a webm file! Note that the command below encodes to H265; if the target machine lacks an H265 codec, swap libx265 for libx264.

ffmpeg -i source264.mp4 -c:v libx265 -crf 28 -preset fast -c:a aac -b:a 128k  -filter:v fps=25 out265.mp4

If you want to cut a part of the video, without re-encoding it

ffmpeg -i input.mp4 -vcodec copy -acodec copy -ss 00:01:00.000 -t 00:00:02.000 output.mp4

nVidia (GPU) : Convert H264/H265

Using the NVIDIA graphics card to convert H265 (MKV) to H264 MP4, i.e. using the NVIDIA hardware encoder (NVENC) to encode video into H264 or H265!

To check whether the hardware encoders are available, run the following command (this uses Windows findstr; under Linux, pipe to grep -i nvidia instead)

ffmpeg -encoders | findstr /ic:"NVIDIA"

If the following two lines are in the output, you can use the NVIDIA encoders; the first codec is H264, and the second is H265 (HEVC)

V....D h264_nvenc           NVIDIA NVENC H.264 encoder (codec h264)
V....D hevc_nvenc           NVIDIA NVENC hevc encoder (codec hevc)

ffmpeg.exe -vsync 0 -hwaccel cuda -i <input_file> -map 0  -c:a copy -c:v h264_nvenc -pix_fmt yuv420p -preset hq <output_file>

-vsync 0 : pass frames through with their original timestamps instead of duplicating or dropping frames to match a target frame rate
-hwaccel cuda : use NVIDIA’s CUDA for hardware-accelerated decoding

Example, Convert mkv to mp4 (Tested OK)

Now, with the above out of the way, the following command should encode your 1080p H265 MKV video to H264, all within the GPU. This re-encodes one NVENC-compatible format to another; on my 1650 card it was encoding at 12x. This direction gives better compatibility; if a smaller file size at similar quality is what you seek, then you should do it the other way around.

ffmpeg -i "1080p.mkv" -c:v h264_nvenc -pix_fmt yuv420p -minrate 500k -maxrate 1000k -c:a mp3 -b:a 128k "1080p.mp4"

The other way around

ffmpeg -i "1080p.mp4" -c:v hevc_nvenc -pix_fmt yuv420p -minrate 200k -maxrate 1000k -c:a aac -b:a 128k "1080p.mkv"

NOTE: The above does everything within the GPU; if, for example, you wanted the decoding done on the CPU, that would make things much slower because the decoded (huge) video would still need to be copied to the GPU.

nVidia : DVD to H265

Now, one very common task people want to execute is converting their old, bulky DVD collection to H265 (or H264 if they value compatibility over size and clarity). DVD files usually live on the disc in the VIDEO_TS folder (plus an AUDIO_TS folder), so this creates a few cases.

Case 1: The AUDIO_TS folder is empty, and the VIDEO_TS folder has files that you know the playback order of; the objective is to put them all in one video file (assuming MKV, but the container is your choice). In this case, I usually start by converting all the videos to H265, then combine them. Here, most of my videos are interlaced (that will be dealt with by yadif), and sometimes they are files of different resolutions, so I will unify their size, as in the command below.

ffmpeg -i "v1.VOB" -c:v hevc_nvenc -c:a aac -b:a 256k -vf yadif,scale=1920:1080 -x265-params "crf=22:min-keyint=25:keyint=50" -preset slow "d1.mkv"

Now, create a list of the files (this one is Windows cmd syntax)

(for %i in (*.mkv) do @echo file '%i') > mylist.txt
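Under Linux, the bash equivalent would be something like:

for f in *.mkv; do echo "file '$f'"; done > mylist.txt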

And concatenate the videos

ffmpeg -f concat -i mylist.txt -c copy output.mkv

Making a video smaller

A couple of hours ago, I received a video that is 50 frames per second and compressed in H264. The video was 58MB, and the sender wanted it under 15MB to send via email. The video was 1:45 long, so I re-encoded it in H265, but she had a problem playing it (no codec), so I decided to re-encode it with VP9 (webm).

To come in under the 15MB limit, I needed to be encoding at around 1 megabit per second. To do this, I did a 2-pass encoding with ffmpeg as follows

ffmpeg -i source.mp4 -c:v libvpx-vp9 -b:v 1M -filter:v fps=25 -pass 1 -an -f null /dev/null && \
ffmpeg -i source.mp4 -c:v libvpx-vp9 -b:v 1M -filter:v fps=25 -pass 2 -c:a libopus out.webm

The first pass collects statistics about the source video in a text log file, and the second pass encodes the new video. In the options above, I have taken the frame rate down to 25fps (from 50), and instead of defining a CRF, I simply told ffmpeg what bitrate I need, which is 1 Mbit per second (1 MByte every 8 seconds).
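The back-of-the-envelope arithmetic (my own rough numbers, assuming roughly 128 kbit/s for the audio) works out like this:

105 seconds x 1 Mbit/s video    = 105 Mbit  ≈ 13.1 MB
105 seconds x 128 kbit/s audio  ≈ 13.4 Mbit ≈  1.7 MB
Total                                       ≈ 14.8 MB, just under the 15MB target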

The previous attempt, H265, was done with the command

ffmpeg -i source264.mp4 -c:v libx265 -crf 28 -preset fast -c:a aac -b:a 128k  -filter:v fps=25 out265.mp4

The H265 file was smaller due to the CRF value used, as well as the lower frame rate.

webP is the new PNG

Superior in both lossless and lossy compression, WebP is the new image format by Google.

Already supported by all the web browsers I have tested it with, WebP is indeed a promising format, so let us get to compressing our images.

I have a big bunch of bitmaps that my scanner spits out (to avoid the lossy JPEG compression the scanner’s driver produces), and I need them converted to lossless WebP to save space. The first image I compressed went from a 552MB bitmap to 183MB, that is, 33% of the original size.

So, under Linux, this is how I would convert all BMPs into WebP images; I think it is exactly the same on Windows.

On the command line, the command for compressing one image looks like

cwebp -lossless 00.bmp -o 00.webp

Now, the next step is to run them in a batch: copy the following text into a file, name it with a .sh extension, and make it executable.
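Something along these lines should do it (a minimal sketch, assuming bash and that all the bitmaps end in .bmp):

#!/bin/bash
# convert every .bmp in the current directory to a lossless .webp
for f in *.bmp; do
    cwebp -lossless "$f" -o "${f%.bmp}.webp"
done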

Audio Video Libraries at home DLNA etc

This is basically a comparison of free software that you can install on your NAS or headless Linux machine. I am using Debian Bullseye, but that is beside the point.

To serve webm files (VP9) through miniDLNA, all you need to do is rename the file (webm is really a Matroska container, so giving it an .mkv extension is enough)!

My Atom PC (D525) is not capable of transcoding on the fly, so I keep it, and I wrote a script that executes the following command for webm video files (and any other unsupported file) when I browse to the file in the web browser and click on it. It is a simple script, but explaining how to install it is going to take some time (permissions and server config), so I will probably post it online and explain it in a different post when I have the time.

ffmpeg -y -i input.webm -c:v libx265 -b:v 2600k -x265-params pass=1 -an -f null /dev/null && \
ffmpeg -i input.webm -c:v libx265 -b:v 2600k -x265-params pass=2 -c:a aac -b:a 128k TEMP_OUTPUT.mp4

The most popular of the bunch is MiniDLNA; it does not transcode, and on OpenWrt it does not even serve video, so it is a good option when you have a PC storing the files and you don’t need transcoding on the fly! For devices that are capable of playing VP9, like modern 4K TVs, all you need to do is rename the file as described above!
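The bulk renaming itself can be a bash one-liner such as the following (the .mkv target is my assumption about an extension miniDLNA will serve; adjust as needed):

for f in *.webm; do mv "$f" "${f%.webm}.mkv"; done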

Universal Media Server : Universal Media Server is a DLNA-compliant UPnP Media Server. It was originally based on PS3 Media Server by shagrath.

OSMC: Open Source Media Center, a free Kodi-based media center OS.

Streama: Self hosted streaming media server.

Gerbera: Free UPnP media server based on MediaTomb

Bluetooth Audio Transmitter from a 3.5mm Audio Jack

I have an Android TV box with no Bluetooth, and plugging Bluetooth dongles into it does not work, so I got a nice little Bluetooth transmitter that receives audio from a regular headphone jack and transmits it to a Bluetooth speaker: the YET-TX9 (on Amazon for $19, but I have my doubts it is the best option).

Please note: As an Amazon Associate I earn from qualifying purchases such as the one above.

There is no user’s manual inside, the instructions are, as you can see, printed on the box.

I will come back here and add the details as soon as I try it out.

Watermarking Video with ffmpeg

In this post, I will explain how to watermark videos with a PNG image watermark that is transparent where it needs to be. I will cover both Linux and Windows (not much is different on the ffmpeg side; the difference is in how you traverse a directory, i.e. the script).

The watermark you see here is what I want to overlay on the video. If you right-click and view the image, you should be able to see that it is transparent around the text.

The PNG with transparency to be overlaid over the video

Now, let us assume that the video file in the directory is called x.mp4 and this watermark image is called watermark.png; the following command should overlay the image over the video

ffmpeg -i x.mp4 -i watermark.png -filter_complex "overlay=10:10" x1.mp4

The command above will create a new file (x1.mp4) which has the watermark overlaid. As you can see if you execute it, the watermark is positioned at the top left corner of the video, which is not necessarily what you want. Because we know the dimensions of the watermark image, we can ask ffmpeg to center it horizontally (and vertically too, if you want it in the center of the frame, but that is not what I want).

So let’s assume the video is full HD, meaning it has the dimensions 1920 x 1080 (width x height), and the image, as you can see, has the dimensions 500 x 100. What I want here is to have the watermark centered horizontally and nudged down 100 pixels vertically; the command to do that would be

ffmpeg -i z.mp4 -i watermark.png -filter_complex "overlay=x=(1920-500)/2:y=100" z3.mp4

And in case this is not clear, here is the command to place it at the bottom right corner of the screen

ffmpeg -i z.mp4 -i watermark.png -filter_complex "overlay=x=(1920-500):y=(1080-100)" z6.mp4
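As a side note, you do not have to hard-code the numbers: the overlay filter exposes the main and overlay dimensions as W, H, w and h, so something like the following should center the watermark regardless of the video’s resolution (z7.mp4 is just a placeholder output name):

ffmpeg -i z.mp4 -i watermark.png -filter_complex "overlay=x=(W-w)/2:y=100" z7.mp4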

Now, with the process of watermarking out of the way, how do we batch-process videos under Windows and under Linux?

Under Linux it is simple. I put all the input files in a directory named “in” and all the output goes into a directory called “out”; the shell script (batch file) sits at the root where those 2 directories exist. The shell script is this:

#!/bin/bash
OIFS="$IFS"
IFS=$'\n'
for filename in "in/"*.mp4; do
    ffmpeg -i "$filename" -i /path_to_watermark/watermark.png -filter_complex "overlay=x=(1920-500):y=(1080-100)" "out/$(basename "$filename" .mp4).mp4"
done
IFS="$OIFS"

I have never been good at Windows scripting, so I looked around for a script to traverse a directory. I found some stuff, and here is my final result; if you can clean it up and make it more robust, please do leave me a comment and I will improve it with your recommendations.