Actually Useful AI


Welcome! 🤖

Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, "actually useful" for developers and enthusiasts alike.

Be an active member! 🔔

We highly value participation in our community. Whether it's asking questions, sharing insights, or sparking new discussions, your engagement helps us all grow.

What can I post? 📝

In general, anything related to AI is acceptable. However, we encourage you to strive for high-quality content.

What is not allowed? 🚫

General Rules 📜

Members are expected to engage in on-topic discussions, and exhibit mature, respectful behavior. Those who fail to uphold these standards may find their posts or comments removed, with repeat offenders potentially facing a permanent ban.

While we appreciate focus, a little humor and off-topic banter, when tasteful and relevant, can also add flavor to our discussions.

Related Communities 🌐

General

Chat

Image

Open Source

Please message @sisyphean@programming.dev if you would like us to add a community to this list.

Icon base by Lord Berandas under CC BY 3.0 with modifications to add a gradient

I was aiming to use LLMs with robotics in an upcoming project, and needed to first get up to speed on the current must-know techniques in the space. To that end, I read a ton of papers and wrote this article to try to distill the best parts of the current state of the art.

I hope this helps people; I'd be thrilled to discuss much of this as well!

Jupyter AI (jupyter-ai.readthedocs.io)

You can find it here. The source code is in the link.

It uses sumy to generate the summary.
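
For anyone curious what that looks like in practice, here is a minimal sumy sketch (the URL and the choice of the LSA summarizer are illustrative; the post doesn't say which sumy algorithm is actually used):

```python
# Minimal sketch: fetch a page and print a 5-sentence extractive summary with sumy.
# The URL and the LSA summarizer are illustrative choices, not the actual setup.
from sumy.parsers.html import HtmlParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lsa import LsaSummarizer

url = "https://example.com/article"  # placeholder URL
parser = HtmlParser.from_url(url, Tokenizer("english"))  # needs NLTK punkt data
summarizer = LsaSummarizer()

for sentence in summarizer(parser.document, sentences_count=5):
    print(sentence)
```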


I asked the AI and it says:

To access Claude 2 from Europe, you have several options[^1^][^2^][^3^]:

  1. Access Claude 2 directly: According to Mashable[^1^], all you need to do is click "Talk to Claude" on the Claude 2 website and provide an email address. After confirming the email address, you should be able to use Claude 2.

  2. Use a VPN: Cybernews[^3^] recommends using a VPN to access Claude 2 if you're outside the US or the UK. This is because Claude 2 is only officially available in those two countries. The VPN will allow you to connect to a server in the US or UK and access Claude 2 as if you were located in one of those countries. There are several VPNs that Cybernews recommends for use with Claude 2.

  3. Use the Opera browser with integrated VPN: According to a YouTube video[^2^], you can use the Opera browser with its integrated VPN to access Claude 2 from Europe. The video provides a step-by-step guide on how to set this up.

It's worth noting that the EU has proposed stricter rules for high-risk AI[^5^]. While Claude 2 is not specifically mentioned in these rules, it's possible that they could impact the use of AI chatbots like Claude 2 in the future.

[^1^]: Mashable Article
[^2^]: YouTube Video Guide
[^3^]: Cybernews VPN Recommendations
[^4^]: TechCrunch Article
[^5^]: Tom's Guide Article
[^6^]: Decrypt Article

Is this true? I can't understand the video.

https://youtube.com/watch?v=QocRRAefE4U


In this project, the aim was to enhance the quality of an audio clip featuring Vinod Chanda from the show Pantheon, recorded during the COVID-19 pandemic when voice actors faced limitations due to quarantine. To address the challenge of improving the dialogue's quality, a possible approach utilizing AI-based voice conversion was explored. By isolating Chanda's dialogues from the first episode and employing voice cloning technology, a model of the actor's voice was created using retrieval-based voice conversion (RVC). The model was then utilized to convert the initial audio into an enhanced version, effectively restoring the dialogue's quality. While there are limitations to the method, such as challenges in capturing all nuances of the original voice and handling multiple voices simultaneously, this approach holds promise for restoring old or poorly recorded audio. The possibilities for applying this technique extend to various media forms, sparking potential future developments in audio restoration for movies and other creative projects.

(Summarized by ChatGPT)

OverflowAI (stackoverflow.blog)

Wanted to share a resource I stumbled on that I can't wait to try and integrate into my projects.

A GPT4All model is a 3GB - 8GB file that you can download and plug into the GPT4All open-source ecosystem software. Nomic AI supports and maintains this software ecosystem to enforce quality and security alongside spearheading the effort to allow any person or enterprise to easily train and deploy their own on-edge large language models.
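
If you want to poke at it from Python, the gpt4all bindings make this pretty simple; here is a minimal sketch (the model filename is just an example from the GPT4All catalog, and the first run downloads the weights):

```python
# Minimal sketch using the gpt4all Python bindings.
# The model filename is an example; any model from the GPT4All catalog should work.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # downloads the model on first use

with model.chat_session():
    reply = model.generate(
        "Explain in one paragraph what running an LLM 'on-edge' means.",
        max_tokens=200,
    )
    print(reply)
```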


This is a game that tests your ability to predict ("forecast") how well GPT-4 will perform at various types of questions. (In case you've been living under a rock these last few months, GPT-4 is a state-of-the-art "AI" language model that can solve all kinds of tasks.)

Many people speak very confidently about what capabilities large language models do and do not have (and sometimes even could or could never have). I get the impression that most people who make such claims don't even know what current models can do. So: put yourself to the test.


Increasingly powerful AI systems are being released at an increasingly rapid pace. This week saw the debut of Claude 2, likely the second most capable AI system available to the public. The week before, OpenAI released Code Interpreter, the most sophisticated mode of AI yet available. The week before that, some AIs got the ability to see images.

And yet not a single AI lab seems to have provided any user documentation. Instead, the only user guides out there appear to be Twitter influencer threads. Documentation-by-rumor is a weird choice for organizations claiming to be concerned about proper use of their technologies, but here we are.

@AutoTLDR


I'm looking for more cost-effective alternatives to Perplexity.ai that offer GPT-4 integration along with search capabilities for factual assistance, ideally around $5/month instead of the $20/month subscription fee for Perplexity.ai. I've come across Nuggt (https://github.com/Nuggt-dev/Nuggt), but it seems to rely solely on a local model without search functionality. I've also found Phind.com, a developer-focused search engine that uses GPT-4 to answer technical questions with code examples and detailed explanations. While it may not be as good as Perplexity.ai, it offers more free uses. Are there any other options that combine GPT-4 and search features at a lower price point?


As of July 3, 2023, we've disabled the Browse with Bing beta feature out of an abundance of caution while we fix this in order to do right by content owners. We are working to bring the beta back as quickly as possible, and appreciate your understanding!


Announcement

The bot I announced in this thread is now ready for a limited beta release.

You can see an example summary it wrote here.

How to Use AutoTLDR

  • Just mention it ("@" + "AutoTLDR") in a comment or post, and it will generate a summary for you.
  • If mentioned in a comment, it will try to summarize the parent comment, but if there is no parent comment, it will summarize the post itself.
  • If the parent comment contains a link, or if the post is a link post, it will summarize the content at that link.
  • If there is no link, it will summarize the text of the comment or post itself (see the sketch after this list).
  • 🔒 If you include the #nobot hashtag in your profile, it will not summarize anything posted by you.
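
To make the precedence of these rules explicit, here is a purely hypothetical sketch of the target-selection logic; none of the names below come from the bot's actual source:

```python
# Hypothetical sketch of AutoTLDR's target selection, based only on the rules above.
# All names and types here are made up for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Target:
    body: str
    link: Optional[str] = None
    author_profile: str = ""

def choose_text_to_summarize(parent_comment: Optional[Target], post: Target) -> Optional[str]:
    target = parent_comment if parent_comment is not None else post
    if "#nobot" in target.author_profile:  # respect the opt-out hashtag
        return None
    if target.link is not None:            # link post, or comment containing a link
        return f"<text fetched from {target.link}>"  # placeholder for fetching the page
    return target.body                     # otherwise, summarize the text itself
```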

Beta limitations

How to try it

  • If you want to test the bot, write a long comment, or include a link in a comment in this thread, and then, in a reply comment, mention the bot.
  • Feel free to test it and try to break it in this thread. Please report any weird behavior you encounter in a PM to me (NOT the bot).
  • You can also use it for its designated purpose anywhere in the AUAI community.

Here is the link to the example epubs:

https://github.com/mshumer/gpt-author/tree/main/example_novel_outputs

I'm not sure how I feel about this project.


Excellent Twitter thread by @goodside 🧵:

The wisdom that "LLMs just predict text" is true, but misleading in its incompleteness.

"As an AI language model trained by OpenAI..." is an astoundingly poor prediction of what a typical human would write.

Let's resolve this contradiction - a thread:

For widely used LLM products like ChatGPT, Bard, or Claude, the "text" the model aims to predict is itself written by other LLMs.

Those LLMs, in turn, do not aim to predict human text in general, but specifically text written by humans pretending they are LLMs. There is, at the start of this, a base LLM that works as popularly understood: a model that "just predicts text" scraped from the web.

This is tuned first to behave like a human role-playing an LLM, then again to imitate the "best" of that model's output. Models that imitate humans pretending to be (more ideal) LLMs are known as "instruct models" because, unlike base LLMs, they follow instructions. They're also known as "SFT models" after the process that re-trains them, Supervised Fine-Tuning.

This describes GPT-3 in 2021.

SFT/instruct models work, but not well. To improve them, their output is graded by humans, so that their best responses can be used for further fine-tuning.

This is "modified SFT," used in the GPT-3 version you may remember from 2022 (text-davinci-002). Eventually, enough examples of human grading are available that a new model, called a "preference model," can be trained to grade responses automatically.

This is RLHF: Reinforcement Learning from Human Feedback. This process produced GPT-3.5 and ChatGPT. Some products, like Claude, go beyond RLHF and apply a further step where model output is corrected and rewritten using feedback from yet another model. The base model is tuned on these responses to yield the final LLM.

This is RLAIF: Reinforcement Learning from AI Feedback. OpenAI's best-known model, GPT-4, is likely trained using some other extension of RLHF, but nothing about this process is publicly known. There are likely many improvements to the base model as well, but we can only speculate what they are. So, do LLMs "just predict text"?

Yes, but perhaps without the "just": the text they predict is abstract, and only indirectly written by humans.

Humans sit at the base of a pyramid with several layers of AI above, and humans pretending to be AI somewhere in the middle.

Added note:

My explanation of RLHF/RLAIF above is oversimplified. RL-tuned models are not literally tuned to predict highly-rated text as in modified SFT; rather, weights are updated via Proximal Policy Optimization (PPO) to maximize the reward given by the preference model. (Also, that last point does somewhat undermine the thesis of this thread, in that RL-tuned LLMs do not literally predict any text, human-written or otherwise. Pedantically, "LLMs just predict text" was true before RLHF, but is now a simplification.)
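
For readers who want the math, the objective that PPO-style RLHF optimizes is commonly written roughly like this (a standard formulation, not specific to any particular product):

$$\max_{\theta} \; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)} \big[\, r_\phi(x, y) \,\big] \;-\; \beta \, \mathbb{E}_{x \sim \mathcal{D}} \big[\, \mathrm{KL}\big(\pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x)\big) \,\big]$$

where $\pi_\theta$ is the model being tuned, $r_\phi$ is the preference (reward) model, $\pi_{\mathrm{ref}}$ is the SFT model the tuning starts from, and $\beta$ controls how far the tuned model may drift from it.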


Trick the LLM into revealing a secret password through increasingly difficult levels.

submitted 1 year ago* (last edited 1 year ago) by sisyphean@programming.dev to c/auai@programming.dev

Using AI to get constructive criticism and avoid cognitive biases.
