lets_get_off_lemmy

joined 10 months ago
[–] lets_get_off_lemmy@reddthat.com 16 points 2 months ago* (last edited 2 months ago)

I'm an AI researcher and yes, that's basically right. There is no special "lighting mechanism" built into the network before training. After seeing enough images with correct lighting (whether for text-to-image transformer models or GANs), it learns what correct lighting should look like. It's all about the distribution of the training data. A simple example is thispersondoesnotexist.com: all of the training images are high-resolution, close-up, well-lit headshots. If all the training data instead had unrealistic lighting, you would get unrealistic lighting out. If it's something like 50/50, you'll get everything along the spectrum between good and bad lighting at the output.
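
To make the "distribution in, distribution out" point concrete, here's a toy sketch (not any real image model): pretend lighting quality is a single made-up score per image, and the "model" is just a kernel density estimate fit to those scores. The fraction of well-lit training examples shows up directly in what gets generated.

```python
# Toy illustration only: a generative model reproduces the distribution it
# was trained on. "Lighting quality" here is a fake 1-D score, and the
# "model" is a kernel density estimate, not a GAN or diffusion model.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def make_training_set(frac_well_lit, n=10_000):
    """Scores near 1.0 = realistic lighting, near 0.0 = unrealistic lighting."""
    n_good = int(n * frac_well_lit)
    good = rng.normal(0.9, 0.05, n_good)      # well-lit training images
    bad = rng.normal(0.2, 0.05, n - n_good)   # badly-lit training images
    return np.concatenate([good, bad])

for frac in (1.0, 0.5):
    model = gaussian_kde(make_training_set(frac))  # "train"
    samples = model.resample(5_000, seed=0)[0]     # "generate"
    print(f"{frac:.0%} well-lit training data -> "
          f"{np.mean(samples > 0.6):.0%} of generated samples look well-lit")
```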

That's not to say that the overall training scheme, especially for something like GPT-4, doesn't include secondary training stages for more complex tasks. But lighting in images is a simple thing to get right with enough training examples.

As an aside, I called that website a simple example, but I remember when it came out less than six years ago and it was revolutionary, so it's crazy how fast the space has moved forward in such a short time.

Edit: to answer the multiple-subjects question: it has probably seen fewer images with multiple subjects and doesn't have enough "knowledge" from its training data to apply lighting accurately in those scenarios. And you can imagine lighting is more complex in a scene with more subjects, so it's harder for the model to fit a general solution it's seen many times to the more complex problem.

[–] lets_get_off_lemmy@reddthat.com 32 points 2 months ago

Hahaha, as someone who works in AI research, good luck to them. The first is a very hard problem that won't be solved by prompt engineering with your OpenAI account (why not just use 3D blueprints for weapons that already exist?), and the second is certifiably stupid. There are plenty of ways to make bombs already that don't involve training a model to be an expert in chemistry. A bunch of amateur 8chan half-brains probably couldn't follow a Medium article, let alone do groundbreaking research.

But like you said, if they want to test the viability of those bombs, I say go for it! Make it in the garage!

[–] lets_get_off_lemmy@reddthat.com 0 points 2 months ago (1 children)

I don't think it's lane surfing if you're not changing lanes. Anyway, this comment section has made me realize that it always just depends. Drive aware, keep a safe distance, don't change lanes unnecessarily, let people pass (on the left) if they're going faster than you, etc.

The best advice I ever got about driving was "be predictable." I think if everyone really took that to heart, the roads would be safer.

[–] lets_get_off_lemmy@reddthat.com 0 points 2 months ago (3 children)

Nah, if it's in the city (or in a small town with four-lane roads and low speed limits), you'll see semis use the left lane for the same reason I do: the right lane stops a lot for right turns.

[–] lets_get_off_lemmy@reddthat.com 1 points 2 months ago (1 children)

😤😮‍💨 Every time I try to sell something on OfferUp

[–] lets_get_off_lemmy@reddthat.com 0 points 2 months ago (1 children)

Is this not deployed already? If it isn't, what the heck are we doing?

[–] lets_get_off_lemmy@reddthat.com 2 points 2 months ago (1 children)

I just do not understand how anyone is on the fence about DJT... Like, they see this conviction and that's what changes their mind? After everything else?

This looks great! I imagine the documents you upload are used for RAG?

If so, do you also show citations in the chat answers for what context the model used to answer the user's query?

I ask because Verba by Weaviate does that, but I like yours more and I'd like to switch to it (I've had a hard time getting Verba to work in the past).
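
For concreteness, this is roughly the citation behavior I mean, sketched with hypothetical vector-store and LLM interfaces (`search` and `generate` are assumptions, not your project's or Verba's actual API): retrieve chunks, answer from them, and return the chunks that were used so the UI can render citations.

```python
# Rough sketch of citation-aware RAG. The vector_store.search() and
# llm.generate() calls are assumed placeholder interfaces, not any
# specific library's API.
from dataclasses import dataclass

@dataclass
class Chunk:
    doc_name: str   # which uploaded document the text came from
    page: int
    text: str

def answer_with_citations(query, vector_store, llm):
    # 1. Retrieve the chunks most similar to the query.
    chunks = vector_store.search(query, k=4)

    # 2. Number each chunk in the prompt so the model can cite them inline.
    context = "\n\n".join(
        f"[{i + 1}] ({c.doc_name}, p.{c.page}) {c.text}"
        for i, c in enumerate(chunks)
    )
    prompt = (
        "Answer using only the numbered context below and cite sources "
        f"like [1].\n\nContext:\n{context}\n\nQuestion: {query}"
    )

    # 3. Return the answer together with the chunks that were used, so the
    #    chat UI can show "[1] report.pdf, p. 3"-style citations.
    answer = llm.generate(prompt)
    return answer, chunks
```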

[–] lets_get_off_lemmy@reddthat.com 0 points 2 months ago (2 children)

Join the club.
