Knowledge Graph Embeddings (KGE) for RAG-LLMs. Our goal was to compare the mathematical differences between traditional static multimodal vector embeddings (TVE), produced by Word2Vec and CLIP encoders on {text: image} datasets, and Knowledge Graph Embeddings (KGE) built from REBEL-extracted triples and trained with PyKEEN.
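As an illustrative sketch (not the repository's code), the core mathematical contrast is this: static vector embeddings are usually compared pointwise by cosine similarity, while a translational KGE model such as TransE (one of the models PyKEEN can train) scores a whole (head, relation, tail) triple by how well head + relation approximates tail. The vectors below are toy values chosen for the example.

```python
import math

def cosine_similarity(u, v):
    """Static embeddings (e.g. Word2Vec/CLIP) are typically compared by
    cosine similarity: sim(u, v) = (u . v) / (|u| |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def transe_score(head, relation, tail):
    """TransE scores a triple by the (negated) distance |h + r - t|,
    so a higher score means a more plausible triple."""
    return -math.sqrt(sum((h + r - t) ** 2 for h, r, t in zip(head, relation, tail)))

# Toy 3-d embeddings:
u = [1.0, 0.0, 0.0]
v = [1.0, 0.0, 0.0]
print(cosine_similarity(u, v))  # identical vectors -> 1.0

h = [0.2, 0.1, 0.0]
r = [0.3, 0.0, 0.1]
t = [0.5, 0.1, 0.1]
print(transe_score(h, r, t))    # h + r == t, the best possible score (distance 0)
```

The key difference: cosine similarity relates two entities directly, whereas the TransE score is relation-aware, so the same pair of entities can score high under one relation and low under another.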