
Memoization

Memoization in Vla prevents duplicate database queries within a request, making your application faster without manual cache management.

In a typical request, the same data is often fetched multiple times:

async function renderPage(userId: string) {
  const user = await db.users.find({ id: userId })
  const posts = await db.posts.findMany({ authorId: userId })

  // Later in the same request...
  const userAgain = await db.users.find({ id: userId }) // Duplicate query!
  const profile = await buildProfile(userAgain)

  return { user, posts, profile }
}

This results in unnecessary database round trips.

Vla’s memoization automatically caches method results per request:

class UserRepo extends Vla.Repo {
  db = this.inject(Database)

  findById = this.memo((id: string) => {
    return this.db.users.find({ id })
  })
}

// In your service
class UserService extends Vla.Service {
  repo = this.inject(UserRepo)

  async getUser(id: string) {
    // First call: executes the query
    const user = await this.repo.findById(id)

    // Second call: returns cached result
    const userAgain = await this.repo.findById(id)

    // No duplicate query!
    return user
  }
}

Use this.memo() in your Repo classes:

class PostRepo extends Vla.Repo {
  db = this.inject(Database)

  // Memoized by post ID
  findById = this.memo((id: string) => {
    return this.db.posts.find({ id })
  })

  // Memoized by author ID
  findByAuthor = this.memo((authorId: string) => {
    return this.db.posts.findMany({ authorId })
  })

  // Multiple parameters work too
  findByTag = this.memo((tag: string, limit: number) => {
    return this.db.posts.findMany({ tag, limit })
  })
}

Memoization is scoped to the request (invoke scope). When a new request starts:

  1. A new scoped kernel is created
  2. A fresh memo cache is initialized
  3. All memoized methods start with an empty cache

// Request 1
await GetUser.invoke('1') // Query executes
await GetUser.invoke('1') // Cache hit

// Request 2 (new scope)
await GetUser.invoke('1') // Query executes again (new cache)

Results are cached based on method arguments:

class UserRepo extends Vla.Repo {
  db = this.inject(Database)

  findById = this.memo((id: string) => {
    return this.db.users.find({ id })
  })
}

// Different arguments = different cache entries
await repo.findById('1') // Query executes
await repo.findById('2') // Query executes
await repo.findById('1') // Cache hit!
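
The same per-argument behavior presumably applies to methods with several parameters: each distinct argument combination gets its own cache entry. A short sketch reusing findByTag from the PostRepo above (postRepo stands for an injected PostRepo instance; deriving the key from all arguments is an assumption):

// Assumption: the cache key is derived from all arguments
await postRepo.findByTag('typescript', 10) // Query executes
await postRepo.findByTag('typescript', 20) // Query executes (different limit)
await postRepo.findByTag('typescript', 10) // Cache hit!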

Because Repos use the invoke scope, the same instance is shared across your entire request. This means memoization works even when the repo is injected in multiple places:

class UserService extends Vla.Service {
  repo = this.inject(UserRepo)

  async getProfile(id: string) {
    return this.repo.findById(id) // Query executes
  }
}

class PostService extends Vla.Service {
  userRepo = this.inject(UserRepo) // Same instance!

  async enrichPost(post: Post) {
    // Cache hit! No duplicate query
    const author = await this.userRepo.findById(post.authorId)
    return { ...post, author }
  }
}
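
To make the shared cache concrete, here is a sketch of a hypothetical FeedService that calls both services above in a single request; it assumes services can be injected into other services the same way repos are:

class FeedService extends Vla.Service {
  // Assumption: services are injectable just like repos
  users = this.inject(UserService)
  posts = this.inject(PostService)

  async getFeedItem(userId: string, post: Post) {
    const profile = await this.users.getProfile(userId) // Query executes
    // If post.authorId === userId, this lookup is a cache hit
    const enriched = await this.posts.enrichPost(post)
    return { profile, post: enriched }
  }
}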

Sometimes you need to bypass the cache and fetch fresh data:

class UserService extends Vla.Service {
  repo = this.inject(UserRepo)

  async refreshUser(id: string) {
    // Skip cache and execute query
    return this.repo.findById.fresh(id)
  }
}

Set cache values without executing the method:

class UserRepo extends Vla.Repo {
  db = this.inject(Database)

  findById = this.memo((id: string) => {
    return this.db.users.find({ id })
  })

  async create(data: UserData) {
    const user = await this.db.users.create({ data })

    // Prime the cache with the new user
    this.findById.prime(user.id).value(user)

    return user
  }
}

// Later in the request
const user = await repo.findById('new-id') // Cache hit!
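
Priming also pairs well with bulk reads: after fetching a list, you can seed the per-id cache so later lookups are hits. A sketch under the assumption that you add a memoized list query (findAll and warmUsers here are hypothetical, not part of the examples above):

class UserRepo extends Vla.Repo {
  db = this.inject(Database)

  findById = this.memo((id: string) => {
    return this.db.users.find({ id })
  })

  // Hypothetical memoized list query
  findAll = this.memo(() => {
    return this.db.users.findMany({})
  })

  async warmUsers() {
    const users = await this.findAll()

    // Prime the per-id cache so later findById calls are hits
    for (const user of users) {
      this.findById.prime(user.id).value(user)
    }

    return users
  }
}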

Start a query in the background to warm the cache:

class UserService extends Vla.Service {
  repo = this.inject(UserRepo)

  async getUserWithRelations(id: string) {
    // Start loading posts in the background
    this.repo.findPosts.preload(id)

    // Do other work
    const user = await this.repo.findById(id)
    const settings = await this.repo.findSettings(id)

    // Posts are likely cached now
    const posts = await this.repo.findPosts(id)

    return { user, settings, posts }
  }
}
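
The example above assumes findPosts and findSettings are memoized methods on UserRepo. A sketch of what that repo might look like (the settings query is illustrative):

class UserRepo extends Vla.Repo {
  db = this.inject(Database)

  findById = this.memo((id: string) => {
    return this.db.users.find({ id })
  })

  findPosts = this.memo((userId: string) => {
    return this.db.posts.findMany({ authorId: userId })
  })

  // Illustrative: assumes a settings table keyed by userId
  findSettings = this.memo((userId: string) => {
    return this.db.settings.find({ userId })
  })
}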

Invalidate cache entries when data changes:

class UserRepo extends Vla.Repo {
  db = this.inject(Database)

  findById = this.memo((id: string) => {
    return this.db.users.find({ id })
  })

  async update(id: string, data: UserData) {
    const user = await this.db.users.update({ where: { id }, data })

    // Bust the cache for this user
    this.findById.bust(id)

    // Or bust all cached entries
    // this.findById.bustAll()

    return user
  }
}
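
Busting one entry and clearing a whole method's cache often go together. A sketch for a delete operation, assuming a hypothetical memoized findAll list query and an illustrative db.users.delete call:

class UserRepo extends Vla.Repo {
  db = this.inject(Database)

  findById = this.memo((id: string) => {
    return this.db.users.find({ id })
  })

  // Hypothetical memoized list query
  findAll = this.memo(() => {
    return this.db.users.findMany({})
  })

  async delete(id: string) {
    // Illustrative delete call, mirroring the update example above
    await this.db.users.delete({ where: { id } })

    // The individual entry is gone...
    this.findById.bust(id)
    // ...and any cached list no longer reflects reality
    this.findAll.bustAll()
  }
}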

Each memoized method has these utilities:

const repo = new UserRepo()

// Call normally (cached)
const user = await repo.findById('1')

// Skip cache and execute fresh
const fresh = await repo.findById.fresh('1')

// Prime the cache
repo.findById.prime('1').value(someUser)

// Preload in background
repo.findById.preload('1')

// Bust cache for specific args
repo.findById.bust('1')

// Bust all cached entries
repo.findById.bustAll()

Memoize pure read operations that are safe to cache:

  • Database queries
  • External API calls
  • Expensive computations
  • File system reads

Avoid memoizing:

  • Write operations (create, update, delete)
  • Methods with side effects
  • Non-deterministic functions

For example:

class UserRepo extends Vla.Repo {
  // ✅ Good: Pure read operation
  findById = this.memo((id: string) => {
    return this.db.users.find({ id })
  })

  // ❌ Bad: Write operation
  create = this.memo(async (data: UserData) => {
    return this.db.users.create({ data })
  })
}

For writes, keep the method un-memoized and use prime or bust to keep the cache consistent, as in the create and update examples above.

Memoization can dramatically reduce database load:

// Without memoization: 100 queries
for (let i = 0; i < 100; i++) {
  const user = await db.users.find({ id: '1' })
}

// With memoization: 1 query
for (let i = 0; i < 100; i++) {
  const user = await repo.findById('1') // Only first call queries DB
}

In real applications, this translates to:

  • Faster response times
  • Lower database load
  • Reduced API costs (for external services)
  • Better scalability