ChatGPT

8775 readers
1 user here now

Unofficial ChatGPT community to discuss anything ChatGPT

founded 1 year ago
Notes:

  • I hit the key twice after the question. I don't know if that affected the answer.
  • I don't have any preloaded context.
submitted 3 weeks ago* (last edited 3 weeks ago) by httpjames@sh.itjust.works to c/chatgpt@lemmy.world

Screenshot taken last week. Will roll out this week (week of July 28).

Be nice (lemmy.world)
submitted 1 month ago by lawrence@lemmy.world to c/chatgpt@lemmy.world
submitted 1 month ago* (last edited 1 month ago) by Timely_Jellyfish_2077@programming.dev to c/chatgpt@lemmy.world

ChatGPT can now be used without logging in. But there's a catch: you can't opt out of having your data used for training.

I haven't heard about this anywhere, so OpenAI may have rolled it out quietly.


There is no longer an option to use ChatGPT without an ID on chatgpt.com for me. Is anyone else having the same problem?


Check out our open-source, language-agnostic mutation testing tool using LLM agents here: https://github.com/codeintegrity-ai/mutahunter

Mutation testing is a way to verify the effectiveness of your test cases. It involves creating small changes, or “mutants,” in the code and checking if the test cases can catch these changes. Unlike line coverage, which only tells you how much of the code has been executed, mutation testing tells you how well it’s been tested. We all know line coverage is BS.
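To make the idea concrete, here's a minimal Python sketch of mutation testing in general (not Mutahunter's actual implementation, and the function names are made up for illustration): we apply one small "mutant" change to a function and check whether an existing test suite notices it.

```python
# Original code under test.
def is_adult(age):
    return age >= 18

# A mutant: the ">=" operator has been changed to ">".
# Line coverage can't tell these two apart; only a good test can.
def is_adult_mutant(age):
    return age > 18

def run_tests(fn):
    """Run the test suite against fn; return True if all assertions pass."""
    try:
        assert fn(20) is True
        assert fn(18) is True   # boundary case: this is what kills the mutant
        assert fn(10) is False
        return True
    except AssertionError:
        return False

print(run_tests(is_adult))         # True: the suite passes on the original
print(run_tests(is_adult_mutant))  # False: the mutant is detected ("killed")
```

If the suite had no boundary-case assertion at `fn(18)`, the mutant would survive, revealing a gap in the tests even though line coverage of `is_adult` is 100%.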

That’s where Mutahunter comes in. We use LLMs to inject context-aware faults into your codebase. As the first AI-based mutation testing tool, Mutahunter builds a contextual understanding of the entire codebase from its AST, enabling it to identify and inject mutations that closely resemble real vulnerabilities. This makes testing more comprehensive and effective, significantly improving software security and quality. We also use LiteLLM, so all major self-hosted LLMs are supported.

We’ve added examples for JavaScript, Python, and Go (see /examples). It can theoretically work with any programming language that provides a coverage report in Cobertura XML format (more formats supported soon) and has a grammar available in TreeSitter.

Here’s a YouTube video with an in-depth explanation: https://www.youtube.com/watch?v=8h4zpeK6LOA

Here’s our blog with more details: https://medium.com/codeintegrity-engineering/transforming-qa-mutahunter-and-the-power-of-llm-enhanced-mutation-testing-18c1ea19add8

Check it out and let us know what you think! We’re excited to get feedback from the community and help developers everywhere improve their code quality.


Over the weekend (this past Saturday, specifically), GPT-4o seems to have gone from capable and fairly permissive at generating creative writing to refusing to generate basically anything due to alleged content policy violations. It'll just say "can't assist with that" or "can't continue." But 80% of the time, if you regenerate the response, it'll happily continue on its way.

It's like someone updated some policy configuration over the weekend and accidentally put an extra 0 in a field for censorship.

GPT-4 and GPT-3.5 seem unaffected by this, which makes it even weirder: switching to GPT-4 avoids all of the issues 4o is having.

I noticed this happening literally in the middle of generating text.

See also: https://old.reddit.com/r/ChatGPT/comments/1droujl/ladies_gentlemen_this_is_how_annoying_kiddie/

https://old.reddit.com/r/ChatGPT/comments/1dr3axv/anyone_elses_ai_refusing_to_do_literally_anything/


Small rant: basically the title. If it admitted it doesn't know the answer instead of answering every question regardless, it would be far more trustworthy.


Company website: https://ssi.inc

submitted 2 months ago* (last edited 2 months ago) by Wilshire@lemmy.world to c/chatgpt@lemmy.world

Has anyone else noticed this kind of thing? This is new for me:

            povies.append({
                'tile': litte,
                're': ore,
                't_summary': put_summary,
                'urll': til_url
            })

"povies" is an attempt at "movies", and "tile" and "litte" are both attempts at "title". And so on. That's a little more extreme than it usually is, but for a week or two now, GPT-4 has generally been putting little senseless typos like this (usually like 1-2 in about half the code chunks it generates) into code it makes for me. Has anyone else seen this? Any explanation / way to make it stop doing this?
