nonentity

joined 2 months ago
[–] nonentity@sh.itjust.works 2 points 1 day ago

I was going to suggest digging up, but you’re on track to reach orbit if you maintain tunnelling on this bearing and momentum.

[–] nonentity@sh.itjust.works 1 points 1 day ago (3 children)

It’s a ballistic bovine which makes a ‘whoosh’ sound while overhead.

[–] nonentity@sh.itjust.works 20 points 3 days ago (5 children)

Nut milk comes from male cows.

[–] nonentity@sh.itjust.works 6 points 4 days ago

I call them compensators.

Their size is inversely proportional to the capacity of the user’s head.

Some are big enough to replace both.

[–] nonentity@sh.itjust.works 7 points 6 days ago

Schizophrenic Bronze Age fan fiction has a lot to answer for.

[–] nonentity@sh.itjust.works 6 points 1 week ago (2 children)

King Arthur came a lot.

[–] nonentity@sh.itjust.works 4 points 1 week ago (1 children)

I thought I explained how to handle the dynamically inserted ads, but I’ll elaborate a little here.

If your Listenarr instance is part of a broader network of other instances, they’ll all potentially receive a unique file with different ads inserted, but the ads will typically be inserted at the same cut location in the program timeline. Listenarr would calculate the hash of the entire file, but also hashes of sub-spans of various lengths.

If the hash of the full file is the same among instances, you know everyone is getting the same file, and any time references suggested for metadata will apply to everyone.

If the full file hash is different, Listenarr starts slicing it up and generating hashes of subsections to help identify where the common and variant sections are. Common sections will usually be the actual content, while variant sections are likely tailored ads. The broader the Listenarr network, the greater the sample size for hashes, which will help automate identification. In fact, the more granular and specific the targeting of inserted ads, the easier it will be to identify them.
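A rough sketch of that comparison step in Python — the fixed chunk size, SHA-256, and all names are illustrative; a real implementation would align chunks on audio frames rather than raw byte offsets:

```python
import hashlib

CHUNK = 16  # bytes per chunk; real audio would use far larger windows

def chunk_hashes(data: bytes, size: int = CHUNK) -> list[str]:
    """Hash fixed-size chunks of a file so instances can compare notes."""
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

def classify_chunks(files: list[bytes], size: int = CHUNK) -> list[bool]:
    """True where every instance saw identical bytes (likely programme
    content), False where the files diverge (likely a tailored ad).
    Assumes the files are the same length, i.e. ads of equal size were
    swapped in at the same cut points."""
    hash_lists = [chunk_hashes(f, size) for f in files]
    return [len(set(column)) == 1 for column in zip(*hash_lists)]
```

Two instances downloading the same episode with different 16-byte “ads” in the middle would see `[True, False, True]`: agreement on the content chunks, disagreement on the inserted span.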

Once you have the file sections sufficiently hashed, tagged, and identified, you can easily stitch together a sanitised media stream into a file any podcast app can ingest.
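The stitching step is then trivial once you have a per-chunk verdict — a minimal sketch, assuming a boolean mask produced by the cross-instance hash comparison (names illustrative):

```python
def stitch_sanitised(data: bytes, keep_mask: list[bool], chunk: int) -> bytes:
    """Reassemble a clean stream from the chunks flagged as common content.

    keep_mask[i] is True when chunk i hashed identically on every
    instance (programme content), False when it diverged (tailored ad)."""
    return b"".join(
        data[i * chunk:(i + 1) * chunk]
        for i, keep in enumerate(keep_mask)
        if keep
    )
```

In practice you’d cut on silence or frame boundaries and re-mux the result, but the principle is just concatenating the agreed-upon spans.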

You could shove this function into a podcast player, but then you’d need to replicate all the existing permutations of player applications.

The beauty of the current podcast environment is it’s just RSS feeds that point to audio files in a standard way. This permits handling by a shim proxy in the middle of the transaction between the publisher and the player.
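The shim itself only has to rewrite the feed so enclosures point back through it — a sketch, where the local endpoint and query format are entirely made up:

```python
import xml.etree.ElementTree as ET
from urllib.parse import quote

# Hypothetical local endpoint that serves sanitised copies of upstream audio.
SHIM_BASE = "http://localhost:8080/sanitised?src="

def rewrite_feed(rss_xml: str) -> str:
    """Rewrite each <enclosure> URL so the player fetches audio via the shim.

    Everything else in the feed (titles, descriptions, chapters) passes
    through untouched, so any standard podcast app can consume it."""
    root = ET.fromstring(rss_xml)
    for enclosure in root.iter("enclosure"):
        upstream = enclosure.get("url")
        if upstream:
            enclosure.set("url", SHIM_BASE + quote(upstream, safe=""))
    return ET.tostring(root, encoding="unicode")
```

The player never knows the difference: it still sees an ordinary RSS feed with ordinary audio URLs.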

This could also be a way to better incorporate media into the fediverse. For example, the generated chapters and transcripts could be directly referenced in Lemmy and Mastodon posts.

[–] nonentity@sh.itjust.works 19 points 1 week ago (3 children)

I think this would make a good -arr application.

Ingest podcast feeds and crowdsource hashes of whole files and partial sections of the downloaded audio, which should be a good start towards auto-tagging dynamically inserted ads.

For non-dynamic ads, provide an interface to manually identify their start/end, and publish the results for others. The same interface could be used to add chapters and other metadata.
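For the manual case, the stored metadata is just a list of ad intervals; inverting them gives the content spans to keep. A sketch, assuming times in seconds (the function name is illustrative):

```python
def keep_spans(duration: float,
               ads: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Invert manually tagged (start, end) ad intervals into the
    content spans to keep, covering the full episode duration."""
    spans, cursor = [], 0.0
    for start, end in sorted(ads):
        if start > cursor:
            spans.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < duration:
        spans.append((cursor, duration))
    return spans
```

Those spans could then be handed to whatever does the actual audio cutting, or published as chapter markers.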

Then you’d just point your podcast app to an RSS feed you self host.

I propose Listenarr, unless this has already been taken.

[–] nonentity@sh.itjust.works 1 points 1 week ago

Nationalist patriotism is a religion that worships dirt.

[–] nonentity@sh.itjust.works 5 points 2 weeks ago

Most of their work of late has been movie and TV soundtracks.

I’ve been a fan for over 30 years, so I’m heavily biased, but I can’t name a miss among the scores Trent and Atticus are responsible for.

[–] nonentity@sh.itjust.works 3 points 3 weeks ago

Navigation is handled by Cetaceans.
