this post was submitted on 28 Nov 2024
72 points (98.6% liked)

cross-posted from: https://feddit.org/post/5167597

The large language model of the OpenGPT-X research project is now available for download on Hugging Face: "Teuken-7B" has been trained from scratch in all 24 official languages of the European Union (EU) and contains seven billion parameters. Researchers and companies can leverage this commercially usable open source model for their own artificial intelligence (AI) applications. Funded by the German Federal Ministry of Economic Affairs and Climate Action (BMWK), the OpenGPT-X consortium – led by the Fraunhofer Institutes for Intelligent Analysis and Information Systems IAIS and for Integrated Circuits IIS – has developed a large language model that is open source and has a distinctly European perspective.

[...]

The path to using Teuken-7B

Interested developers from academia or industry can download Teuken-7B free of charge from Hugging Face and work with it in their own development environment. The model has already been optimized for chat through "instruction tuning". Instruction tuning adapts a large language model so that it correctly understands instructions from users, which is important for practical use – for example, in a chat application.
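
In practice, working with the instruction-tuned model in a local development environment might look roughly like the sketch below. It uses the standard Hugging Face transformers API; the repository name, the trust_remote_code flag, and the exact chat-template behaviour are assumptions that should be verified against the model cards linked further down.

```python
# Minimal sketch (not from the article): loading the instruction-tuned model with
# Hugging Face transformers and sending it one chat-style prompt.
# The repo id and the trust_remote_code flag are assumptions -- verify them against
# the model cards at https://huggingface.co/openGPT-X before use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openGPT-X/Teuken-7B-instruct-commercial-v0.4"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # seven billion parameters fit on a single ~16 GB GPU in bf16
    device_map="auto",
    trust_remote_code=True,
)

# Because the model is instruction-tuned, prompts are passed as chat messages;
# the tokenizer's chat template turns them into the format seen during tuning.
messages = [{"role": "user", "content": "Summarize the Treaty of Lisbon in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```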

Teuken-7B is freely available in two versions: one for research-only purposes and an “Apache 2.0” licensed version that can be used by companies for both research and commercial purposes and integrated into their own AI applications. The performance of the two models is roughly comparable, but some of the datasets used for instruction tuning preclude commercial use and were therefore not used in the Apache 2.0 version.

Download options and model cards can be found at the following link: https://huggingface.co/openGPT-X
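
As a rough illustration of that download step, the huggingface_hub client library can list the repositories published under the openGPT-X organisation and fetch one of them into the local cache. Only the organisation name comes from the link above; the concrete repository id in the sketch is an assumption to be picked from the actual listing.

```python
# Minimal sketch (assumptions noted): browsing and downloading openGPT-X repositories
# with the huggingface_hub client library.
from huggingface_hub import HfApi, snapshot_download

# List everything published under the openGPT-X organisation, e.g. the research-only
# and the Apache-2.0 (commercially usable) instruct variants mentioned above.
api = HfApi()
for m in api.list_models(author="openGPT-X"):
    print(m.id)

# Download one repository into the local Hugging Face cache for offline use.
repo_id = "openGPT-X/Teuken-7B-instruct-commercial-v0.4"  # assumed id -- pick from the listing
local_path = snapshot_download(repo_id=repo_id)
print("Model files stored at:", local_path)
```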

[–] wagesj45@fedia.io 6 points 3 weeks ago (3 children)

This is awesome. Wish the USA could do stuff like this.

[–] 0x815 6 points 3 weeks ago (1 children)

The USA can certainly do this; they have what it takes. Public investment in such projects will be hard to get in the next four years, I guess, but there could be some private initiative? I don't know the U.S. well enough in that respect, though.

[–] wagesj45@fedia.io 4 points 3 weeks ago (1 children)

Well, we do have some private companies doing things like this, such as Meta with its Llama models and Google with its smaller Gemma models. But I would love for there to be some publicly funded options that truly belong to all of us.

[–] 0x815 3 points 3 weeks ago

Yeah, there are many FOSS organizations in the U.S., like the Open Source Lab at Oregon State University, the Open Source Software Institute, and many others. I guess they could do it, especially if some of them joined forces.
