When she says she loves LoRA and you're thinking about the wireless communication protocol for IoT devices, but she's actually talking about Low-Rank Adaptation for fine-tuning large language models. Classic miscommunication between hardware and AI engineers.
For the uninitiated: LoRA (Low-Rank Adaptation) is a technique that lets you fine-tune massive AI models without retraining the entire thing—instead of updating all the weights, you freeze them and train a pair of small low-rank matrices on the side. It's like modding your game with a 50MB patch instead of redownloading the entire 100GB game. Genius, really.
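To make the "lightweight patch" idea concrete, here's a minimal sketch of the LoRA update in plain NumPy. The layer sizes, rank, and `alpha` scaling value are illustrative assumptions, not values from any particular model: the frozen weight `W` stays untouched, and only the tiny `A` and `B` matrices would be trained.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2  # hypothetical layer dimensions and low rank

# Frozen pretrained weight: never updated during fine-tuning.
W = rng.standard_normal((d_out, d_in))

# LoRA adapter: only these two small matrices get trained.
# B starts at zero, so the adapted model initially matches the original.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

alpha = 16  # scaling hyperparameter (assumed value)

def forward(x):
    # Original path plus the low-rank update, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)

# With B == 0, the adapter is a no-op: output equals the frozen model's.
assert np.allclose(forward(x), W @ x)

# The "patch size" win: r*(d_in + d_out) trainable params
# versus d_in*d_out for full fine-tuning.
print(r * (d_in + d_out), "adapter params vs", d_in * d_out, "full")
```

The asymmetric init (`B = 0`, `A` small random) is the standard trick: training starts from exactly the pretrained behavior, and the adapter only nudges it from there.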
Meanwhile, the other one—LoRa, short for "Long Range"—is a low-power wireless protocol perfect for sending tiny packets of data across kilometers. Two completely different worlds, essentially the same acronym (the capitalization of one letter is doing a lot of work). The tech industry's favorite pastime: reusing abbreviations until nobody knows what anyone's talking about anymore.