Large Language Model (LLM)


Large Language Models (LLMs) are deep learning models trained on massive datasets to understand, predict, summarize, and generate natural-language content. A common application is interpreting a human-language text query and generating a text response that synthesizes information from multiple sources.
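At their core, LLMs generate text by repeatedly predicting a likely next token given the preceding context. The following is a toy sketch of that generation loop using a simple bigram frequency table rather than a neural network; the corpus, function names, and parameters are illustrative assumptions, not part of any real LLM implementation:

```python
import random
from collections import defaultdict

# Toy "training" corpus; real LLMs train on massively large datasets.
corpus = "the model predicts the next word and the next word follows the model".split()

# Count which word tends to follow which (a crude stand-in for learned
# next-token probabilities in an actual LLM).
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start, n_words, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    random.seed(seed)
    words = [start]
    for _ in range(n_words):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break  # no known continuation for this word
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the", 5))
```

Real LLMs replace the frequency table with a transformer network that scores every token in a large vocabulary conditioned on the full context, but the autoregressive loop — predict, append, repeat — is the same shape.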