Groq, a company that manufactures artificial intelligence chips, claims its hardware lets neural networks run much faster. That speed could finally make conversation with a chatbot stop feeling artificial, Gizmodo writes.

And that is not far from the truth: Groq’s chatbot demo is strikingly fast, generating responses in a fraction of a second and citing the sources it used. In one demonstration, a CNN anchor held a live conversation with the chatbot and was impressed by how quickly it answered.

Groq makes AI chips it calls Language Processing Units (LPUs) and claims they are faster than NVIDIA’s graphics processing units (GPUs). NVIDIA GPUs are considered the industry standard for running AI models, but early benchmarks suggest LPUs could dethrone them.

Groq is an inference engine, not a chatbot like ChatGPT or Gemini; it does not work on its own and is not meant to replace them, but to make such chatbots run faster. You can try it yourself on the Groq website.
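For developers, Groq also exposes an OpenAI-style API. Below is a minimal sketch of calling it with the official groq Python package; the API key is a placeholder and the model name is only illustrative, so check Groq’s documentation for what is currently hosted.

# Minimal sketch of a chat request against Groq's inference API.
# The key and model name are placeholders, not values from the article.
from groq import Groq

client = Groq(api_key="YOUR_GROQ_API_KEY")

response = client.chat.completions.create(
    model="mixtral-8x7b-32768",  # illustrative; pick a model Groq actually hosts
    messages=[{"role": "user", "content": "Explain what an LPU is in one sentence."}],
)

print(response.choices[0].message.content)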

Groq served 247 tokens per second, versus 18 tokens per second for Microsoft’s deployment, according to benchmark results Artificial Analysis published last week. That means ChatGPT could run more than 13 times faster (247 ÷ 18 ≈ 13.7) if it were powered by Groq chips.

According to the company’s CEO, Jonathan Ross, Groq’s LPUs bypass two bottlenecks of LLMs that GPUs or CPUs have trouble with: compute density and memory bandwidth.
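To get a feel for why memory bandwidth is a bottleneck, here is a rough back-of-the-envelope estimate (our own illustration, not a figure from Groq or Gizmodo): when a model generates text one token at a time, roughly all of its weights must be streamed from memory for each new token, so memory bandwidth puts a hard ceiling on single-stream speed.

# Back-of-the-envelope decode-speed bound for a memory-bandwidth-limited model.
# All numbers below are hypothetical, chosen only to illustrate the bottleneck.
def max_tokens_per_second(params_billion: float,
                          bytes_per_param: float,
                          bandwidth_gb_s: float) -> float:
    """Upper bound on tokens/s if every weight is read once per token."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# Example: a 70-billion-parameter model in 16-bit precision (2 bytes/weight)
# on hardware with 2 TB/s of memory bandwidth.
print(round(max_tokens_per_second(70, 2, 2000), 1))  # ~14.3 tokens/s

By that rough bound, even very fast conventional memory yields only a dozen or so tokens per second for a large model in a single stream, which is why an architecture that raises effective bandwidth (Groq’s LPU reportedly keeps weights in fast on-chip memory) can generate tokens so much more quickly.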

It is also possible that the company will soon find itself in a legal dispute over the name with Elon Musk, whose company xAI develops the similarly named Grok chatbot.

The name Grok comes from the title of Robert Heinlein’s 1961 science fiction book Stranger in a Strange Land. The word means “to understand deeply and intuitively.”