Common developer challenges in Large Language Model projects

This session covers the problems developers encounter when building LLM-based projects. Some are easy to solve with a few experiments, such as adjusting search indexes and tuning data retrieval. Others are harder, such as keeping context coherent across large token windows or finding the right balance between performance, cost, and complexity.

Key topics

  • The difference between a model's attention window and its context window size.
  • The response evaluation problem: automated (ML-based) evaluation versus human evaluation.
  • Model and RAG poisoning by external actors and company employees.
  • Balancing RAG data ingestion across different document types and handling repeated information.
  • A multi-agent/LLM approach to validation and secondary fact-checking (see the sketch below).

Join this talk to learn more about these problems, strategies to minimize their impact, and how to prepare for your next LLM project.
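
As a taste of the multi-agent validation topic, here is a minimal sketch of how a second LLM can act as a fact-checker for the first. The `call_llm` function is a placeholder for whatever model client your project uses, and the prompts and PASS/FAIL convention are illustrative assumptions, not material from the talk.

```python
# Minimal sketch of a two-model validation loop: a "generator" LLM drafts an
# answer and a second "checker" LLM reviews it against the retrieved sources.
from typing import List


def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM endpoint and return its text reply."""
    raise NotImplementedError("Wire this to your model provider's client.")


def generate_answer(question: str, sources: List[str]) -> str:
    # Ask the primary model to answer strictly from the retrieved context.
    context = "\n\n".join(sources)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)


def check_answer(question: str, answer: str, sources: List[str]) -> bool:
    # Ask a second model to act as a fact-checker and reply PASS or FAIL.
    context = "\n\n".join(sources)
    prompt = (
        "You are a fact-checker. Reply PASS if every claim in the answer is "
        "supported by the context, otherwise reply FAIL.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\n\nAnswer: {answer}"
    )
    return call_llm(prompt).strip().upper().startswith("PASS")


def answer_with_validation(question: str, sources: List[str], retries: int = 2) -> str:
    # Regenerate a few times if the checker rejects the draft.
    for _ in range(retries + 1):
        draft = generate_answer(question, sources)
        if check_answer(question, draft, sources):
            return draft
    return "The model could not produce an answer supported by the sources."
```

The design choice here is simply separation of duties: the checker never sees the generator's prompt, only the question, the draft, and the sources, which makes it harder for one bad prompt to poison both steps.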
