Tensormesh raises $4.5M to squeeze more inference out of AI server loads

Tensormesh uses an expanded form of KV caching to make inference workloads as much as 10 times more efficient.