konodas

joined 1 year ago
[–] konodas@feddit.de 0 points 1 year ago* (last edited 1 year ago)

Tattoos are associated with criminals, particularly the yakuza, afaik.

[–] konodas@feddit.de 0 points 1 year ago (1 children)

When we talk about lobbying, we are apparently using different definitions and therefore talking past each other. It seems to me that when you call for a ban on lobbying, you have a specific, morally questionable form of lobbying in mind. At this point I would refer you to the relevant Wikipedia article.

A few things to think about:

  • Who decides whether an advisor is "objective"?
  • How do you determine an "appropriate salary" (without a market)?
  • Why shouldn't politicians be able to get rich (millionaires etc.)? Wouldn't that make them less susceptible to bribery, and motivate the "best" people to pursue this career path?
[–] konodas@feddit.de 0 points 1 year ago (3 children)

Banning lobbying, which is not only done by companies but by all kinds of groups (Greenpeace, consumer-protection organizations, other NGOs, etc.), makes little sense imo. Politicians have to get the information and arguments on which they base their decisions from somewhere.

What matters more here, imo, is transparency.

[–] konodas@feddit.de 0 points 1 year ago

Which premise? Could you elaborate?

[–] konodas@feddit.de 0 points 1 year ago (2 children)

I would argue that we also know how brains work on a physical/chemical level, but that does not mean that we understand how they work on a system level. Just like we know how NNs work on a mathematical level, but not on a system level.

When someone claims that some object does not have a certain property, I would expect them to define what the necessary conditions for this property are, and then show that these conditions are not satisfied by the object.

As far as I know, the current consensus hypothesis is that sentience/consciousness emerges from certain patterns of information processing. So, one would have to show that the necessary kind of information processing is not happening in some object. One can argue that dead brains are not conscious, as there is no information processing going on at all. However, as it is unknown what kind of information processing is necessary for consciousness to arise, one currently cannot precisely define the necessary conditions (beyond "there has to be some information processing"), and therefore cannot show that NNs fail to meet them. So, I think it is difficult to be "certain".

[–] konodas@feddit.de 0 points 1 year ago

He did not claim that shorting caused the 08 crash, or am I missing something?

According to "The Big Short", the reason was that banks gave loans to people who could not really afford them in the event of an unexpected drop in the housing market (mortgage-backed, as you say), bundled the loans into packages, went to rating agencies which gave the packages top ratings, sold them to other institutions, and then shorted them once they noticed the market had unexpectedly dropped, knowing people would not be able to pay back the loans in the packages. Which was completely reasonable, just somewhat unethical.

So, i think you could say it was an error of the rating agencies, as they underestimated the risk of a drop in the housing market when giving out the rating.

[–] konodas@feddit.de 0 points 1 year ago

I enjoyed it. Linear algebra and optimization are treated much more in depth compared to MML. IIRC he then goes to linear regression and derives most other models from there, which is an interesting perspective.
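To illustrate why linear regression makes a natural starting point, here is my own minimal sketch (not taken from the book): simple 1-D ordinary least squares has a closed-form solution, slope = cov(x, y) / var(x), and everything else can be built up from there.

```python
# Minimal 1-D ordinary least squares: slope = cov(x, y) / var(x).
# Toy data lies exactly on y = 2x + 1, so the fit recovers those values.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

mx = sum(xs) / len(xs)  # mean of x
my = sum(ys) / len(ys)  # mean of y

slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
print(slope, intercept)  # 2.0 1.0
```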

 

I usually sit in an IKEA armchair, but I've noticed that it apparently gives me back pain after a while. Now I wanted to look around for an alternative, e.g. a bean bag.

What do you sit or lie in/on while reading?

[–] konodas@feddit.de 0 points 1 year ago (4 children)

What we know for certain is that Bing, ChatGPT, and other language models are not sentient

I wonder how we can "certainly" know that.

[–] konodas@feddit.de 0 points 1 year ago* (last edited 1 year ago)

I second that. Being able to test medium-sized models locally can make debugging much easier.

I have a 3070 with 8GB VRAM, which can train e.g. GPT-2 with a batch size of 1 at full precision.
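As a rough sanity check that this fits in 8GB, here is my own back-of-the-envelope estimate (assuming the 124M-parameter GPT-2 base model and the Adam optimizer, which keeps two extra states per parameter; activation memory is ignored, so this is a lower bound):

```python
# Back-of-the-envelope VRAM estimate for full-precision (fp32) Adam training.
# Assumption: 124e6 parameters (GPT-2 base); activations ignored, so this is a lower bound.
params = 124e6
bytes_fp32 = 4

weights = params * bytes_fp32          # model weights
grads = params * bytes_fp32            # gradients
adam_states = 2 * params * bytes_fp32  # Adam's first and second moments

total_gb = (weights + grads + adam_states) / 1024**3
print(f"{total_gb:.2f} GiB before activations")  # 1.85 GiB before activations
```

That leaves a few GB for activations, which is why a batch size of 1 still works on 8GB.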

[–] konodas@feddit.de 0 points 1 year ago* (last edited 1 year ago) (2 children)

Pattern Recognition and Machine Learning by Bishop is really good, imho. It's relatively math-heavy, so depending on your skill, reading Mathematics for ML or Linear Algebra and Optimization for ML by C. Aggarwal first might be a good idea.

[–] konodas@feddit.de 0 points 1 year ago

Yes, that's probably true.

[–] konodas@feddit.de 0 points 1 year ago

Analog is best, e.g. on a sheet of paper.
