On Aug. 24, Meta AI announced the launch of ‘Code Llama,’ a community-licensed artificial intelligence (AI) coding tool built on the Llama 2 large language model (LLM).
The new tool is a fine-tuned version of Llama 2 trained specifically to generate and discuss computer code.
According to a blog post from Meta, Code Llama comes in several variants, with one model fine-tuned for general coding across a number of languages, including Python, C++, Java, PHP, TypeScript, C#, Bash and more.
The other variants are Code Llama Python and Code Llama Instruct. The former is fine-tuned for Python applications, which Meta says is intended to further support the AI community.
According to Meta, the emphasis in these first two model variants is on understanding, explaining, and discussing code.
Code Llama Instruct, however, is the fine-tuned version of Code Llama that Meta recommends for actually generating code. According to the blog post, it’s been engineered specifically to generate “helpful and safe answers in natural language.”
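For readers curious what generating code from a natural-language request looks like in practice, the sketch below uses the open-source Hugging Face transformers library. The checkpoint name and the [INST] prompt wrapping are assumptions for illustration and are not taken from Meta's announcement.

```python
# A minimal sketch of prompting an instruction-tuned code model with the
# Hugging Face transformers library. The repository id below and the
# [INST] ... [/INST] prompt format are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-Instruct-hf"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on one GPU
    device_map="auto",          # requires the `accelerate` package
)

# Ask for code in plain English and print the model's reply.
prompt = "[INST] Write a Python function that checks whether a string is a palindrome. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```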
The models are also available in different parameter sizes to suit different environments. Code Llama comes in 7-billion, 13-billion and 34-billion parameter sizes, each with different capabilities.
Meta says the 7B model, for example, can run on a single GPU. The 34B model requires more substantial hardware but returns the best results, while the smaller 7B and 13B models are faster and better suited to low-latency tasks such as real-time code completion.
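As a rough illustration of why the smallest model fits on a single GPU, the following sketch estimates the memory needed just to hold each model's weights in 16-bit precision. The two-bytes-per-parameter figure is a common rule of thumb, not a number from Meta's post.

```python
# Back-of-the-envelope estimate of GPU memory for model weights alone,
# assuming 16-bit precision (2 bytes per parameter). Activations, the KV
# cache and framework overhead add more on top of this.
for billions in (7, 13, 34):
    approx_gib = billions * 1e9 * 2 / 1024**3
    print(f"{billions}B parameters -> ~{approx_gib:.0f} GiB for fp16 weights")
```

By this estimate, the 7B model's weights come to roughly 13 GiB and fit on a single high-memory consumer GPU, while the 34B model's roughly 63 GiB of weights call for multiple GPUs or aggressive quantization.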
“CodeLlama -- a version of Llama2 that was fine-tuned for code tasks is live now. Available in 7B, 13B and 34B. https://t.co/4PQBlchlrn”
Code Llama is generally available under the same community license as Llama 2.