Grok-1: The New Open-Source LLM with 314 Billion Parameters


Key takeaways:

  1. Grok-1, a new large language model (LLM) with 314 billion parameters, has been open-sourced by xAI, the AI company founded by Elon Musk.
  2. Grok-1 was not fine-tuned with RLHF (reinforcement learning from human feedback), so the released checkpoint is a raw base model that can answer politically incorrect questions without enforced politeness.
  3. The conversation interface for Grok-1 is not yet available in Europe, but a VPN can be used to sign up for the waitlist.
  4. Companies like Perplexity and SolvAI plan to fine-tune and optimize Grok-1 for their services.

# Introduction

Grok-1, a large language model (LLM) with 314 billion parameters, has been open-sourced by xAI, the AI company founded by Elon Musk. This makes Grok-1 the largest open-weights LLM released to date.
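
For readers who want the weights themselves, here is a minimal download sketch. It assumes the checkpoint is hosted in the `xai-org/grok-1` Hugging Face repository under a `ckpt-0/` folder (both names reflect the public release and are worth verifying), and that you have roughly 300 GB of disk to spare:

```python
# Minimal sketch: fetch the open-sourced Grok-1 checkpoint from Hugging Face.
# Assumptions: the repo id "xai-org/grok-1" and the "ckpt-0/" weights folder
# match the public release; verify before running. Expect a ~300 GB download.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="xai-org/grok-1",
    repo_type="model",
    allow_patterns=["ckpt-0/*"],  # only the weight shards, not other repo files
    local_dir="checkpoints",
)
```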

# Grok-1: A Genuine LLM without RLHF

Grok-1 was not put through RLHF (reinforcement learning from human feedback), which means it was not fine-tuned to give only "safe" and "helpful" answers. That makes it valuable for research: the raw base model can respond to politically incorrect questions and is under no obligation to be polite.
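
Grok-1 itself is far too large to run casually, but the difference a missing RLHF stage makes can be illustrated with any small base checkpoint. The sketch below uses GPT-2 purely as a stand-in: a base model continues the prompt as raw text rather than replying like a polished assistant.

```python
# Illustration only: GPT-2 stands in for a base (non-RLHF) checkpoint,
# since Grok-1's 314B parameters are impractical to load here.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The capital of France is", max_new_tokens=10)
print(result[0]["generated_text"])  # plain continuation, not a chat-style answer
```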

# Accessing Grok-1

The conversation interface for Grok-1 is currently not available in Europe. However, using a VPN allows users to sign up for the waitlist to test the model.

# Companies Fine-Tuning Grok-1

Perplexity's CEO has announced plans to add Grok-1 to their services, which will require fine-tuning and optimization. SolvAI likewise plans to fine-tune and optimize Grok-1 for conversational search.

# Community Reactions

The community has reacted positively to the release of Grok-1, with some expressing excitement about its potential applications and others noting the sheer size of the weights: roughly 318 GB to download.
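
The 318 GB figure holds up to a quick back-of-the-envelope check, assuming the released checkpoint stores its 314 billion parameters at roughly one byte each (consistent with 8-bit quantized storage):

```python
# Rough size check, assuming ~1 byte per parameter on average (8-bit storage).
# The real checkpoint may mix precisions, so this is only an order-of-magnitude test.
params = 314e9               # 314 billion parameters
bytes_per_param = 1.0        # assumption: ~8-bit storage
print(f"~{params * bytes_per_param / 1e9:.0f} GB")  # ~314 GB vs the reported ~318 GB
```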