EngBrief

Engineering insights from the world's best tech companies, curated and summarized.

© 2026 EngBrief

Updated every 4 hours

Dropbox Tech Engineering Blog

10 articles on EngBrief

Dropbox Tech explores how the company builds and scales its file sync, collaboration, and storage platform. Posts cover distributed storage systems, desktop client engineering, machine learning for content intelligence, security architecture, and the migration from AWS to their own custom infrastructure.

Distributed Storage · Desktop Engineering · Machine Learning · Infrastructure
Visit Dropbox Tech blog →

Latest Articles

Dropbox Tech · 1 min · 4d ago

Improving storage efficiency in Magic Pocket, our immutable blob store

Dropbox's immutable blob store, Magic Pocket, faced a significant increase in storage overhead due to increased fragmentation and under-filled storage volumes after a new service rollout. The existing compaction strategy couldn't effectively reclaim space from the long tail of severely under-filled volumes, exposing a limitation of the steady-state approach. Dropbox developed a multi-strategy approach, combining compaction, garbage collection, and efficient redundancy, to drive overhead back down and regain storage efficiency and control at exabyte scale.

Magic Pocket · Data Storage · Performance Optimization
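The compaction idea in the summary can be pictured as a small scheduler that finds under-filled volumes and repacks their live bytes into as few full volumes as possible. Everything below (volume sizes, the fill threshold, the `Volume` type) is a hypothetical simplification, not Magic Pocket's actual design:

```python
# Hypothetical sketch of a compaction pass over under-filled volumes.
# "Volume" is a simplified stand-in for a storage volume: a capacity
# in bytes and the bytes still live (not yet deleted).

from dataclasses import dataclass

@dataclass
class Volume:
    vid: str
    capacity: int   # bytes
    live: int       # live (non-deleted) bytes

def pick_compaction_candidates(volumes, fill_threshold=0.5):
    """Return volumes whose fill ratio is below the threshold,
    emptiest first, so compaction reclaims the most space early."""
    under = [v for v in volumes if v.live / v.capacity < fill_threshold]
    return sorted(under, key=lambda v: v.live / v.capacity)

def compact(candidates, capacity):
    """Merge live bytes from under-filled volumes into as few
    full volumes as possible; return the number of new volumes."""
    total_live = sum(v.live for v in candidates)
    return -(-total_live // capacity)  # ceiling division

volumes = [
    Volume("a", 100, 90),  # healthy, left alone
    Volume("b", 100, 20),  # under-filled
    Volume("c", 100, 10),  # severely under-filled
    Volume("d", 100, 35),  # under-filled
]
cands = pick_compaction_candidates(volumes)
print([v.vid for v in cands])  # emptiest first
print(f"{len(cands)} volumes -> {compact(cands, 100)} after compaction")
```

The long tail the post describes is exactly the `cands` list: many volumes, each holding little live data, which a threshold-based pass like this can fold together.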
Dropbox Tech · 1 min · 12d ago

Reducing our monorepo size to improve developer velocity

Dropbox's server monorepo grew to 87GB, slowing engineering velocity with long clone times and approaching GitHub's 100GB repository limit. The team discovered that Git's delta compression was storing internationalization files inefficiently, causing rapid repository growth. A locally tested fix using an experimental flag reduced the repository to 20GB, but further investigation showed this solution was incompatible with GitHub's server-side optimizations.

Monorepo Management · Developer Productivity
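To see why delta compression matters so much for near-identical files like the internationalization files in the post, here is an illustrative (and deliberately naive) comparison: storing each revision of a file compressed on its own versus storing the first revision plus compressed diffs. The data is synthetic and the encoding is not what Git actually does:

```python
# Illustrative only: compare independent compression of each revision
# against base-plus-deltas for a family of near-identical files.

import difflib
import zlib

base = "\n".join(f'msgid "string_{i}"\nmsgstr "value_{i}"' for i in range(500))
revisions = [base]
for n in range(1, 20):  # each revision changes exactly one entry
    revisions.append(base.replace('"value_0"', f'"value_0_rev{n}"'))

# Strategy 1: every revision compressed as a full copy.
full_size = sum(len(zlib.compress(r.encode())) for r in revisions)

# Strategy 2: first revision in full, then compressed diffs.
delta_size = len(zlib.compress(revisions[0].encode()))
for prev, cur in zip(revisions, revisions[1:]):
    diff = "\n".join(
        difflib.unified_diff(prev.splitlines(), cur.splitlines(), lineterm="")
    )
    delta_size += len(zlib.compress(diff.encode()))

print(f"full copies: {full_size} bytes, deltas: {delta_size} bytes")
```

When the delta-base selection goes wrong, storage degrades toward the `full_size` case, which is the failure mode the post attributes to its repository growth.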
Dropbox Tech · 1 min · 20d ago

How we optimized Dash's relevance judge with DSPy

Dropbox's engineering team optimized Dash's relevance judge using DSPy, a framework for systematically optimizing prompts against a measurable objective. To adapt their existing judge for a lower-cost model, they defined a clear objective (minimizing disagreement with human relevance judgments while ensuring usable outputs) and used DSPy's GEPA optimizer to generate structured feedback for each example where the model disagreed with humans. The result is a repeatable optimization loop and a more reliable, cheaper judge for production use.

Performance · AI
DSPy · Dash · Prompt Engineering · Optimization Techniques · Automated Loop
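The GEPA loop itself is DSPy-specific, but its core shape — score each candidate prompt by disagreement with human judgments and keep the best — can be shown with a toy stand-in. `judge` below is a deterministic fake, not an LLM call, and none of this is the actual DSPy API:

```python
# Toy prompt-selection loop: minimize disagreement with human labels.

def judge(prompt: str, doc: str) -> int:
    """Hypothetical stand-in for an LLM relevance judge (1 = relevant).
    It fakes prompt sensitivity by keying off the word "strict"."""
    strict = "strict" in prompt
    return 1 if (len(doc) > 5 if strict else len(doc) > 2) else 0

def disagreement(prompt, examples):
    """Fraction of examples where the judge disagrees with the human label."""
    return sum(judge(prompt, d) != y for d, y in examples) / len(examples)

# (doc, human_relevance_label) pairs -- all invented for illustration.
examples = [
    ("short", 0),
    ("a much longer document", 1),
    ("tiny", 0),
    ("medium doc", 1),
]
candidates = ["Judge leniently.", "Judge strictly.", "Be strict and terse."]

best = min(candidates, key=lambda p: disagreement(p, examples))
print(best, disagreement(best, examples))
```

GEPA replaces this brute-force `min` with feedback-driven prompt mutation, but the objective being minimized is the same disagreement metric.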
Dropbox Tech · 1 min · Feb 26, 2026

Using LLMs to amplify human labeling and improve Dash search relevance

Dropbox's Dash search engine uses a retrieval-augmented generation (RAG) pattern to generate responses, relying on large language models (LLMs) to analyze relevant content and ground responses. To improve search relevance, Dash pairs human labeling with LLM-assisted labeling, starting with a small amount of internal, human-labeled data and then amplifying efforts with LLMs to produce relevance labels at scale. This combination allows Dropbox to train Dash's search ranking models with high-quality labeled examples, resulting in improved search relevance and more accurate responses.

AI
LLMs · human labeling · AI/ML · Search Relevance · Model Training · Data Labeling
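The amplification pattern in the summary can be sketched as: validate an automated labeler against a small human-labeled seed set, then let it label at scale only if its agreement clears a bar. All names, data, and the 0.9 threshold below are hypothetical:

```python
# Sketch of human-seeded, LLM-amplified labeling.

def llm_label(query_doc):
    """Stand-in for an LLM relevance call; returns 0 or 1.
    (A naive substring check plays the role of the model.)"""
    query, doc = query_doc
    return 1 if query in doc else 0

def agreement(labeler, seed):
    """Agreement rate between the labeler and human labels on the seed set."""
    return sum(labeler(x) == y for x, y in seed) / len(seed)

# Small human-labeled seed: ((query, doc), human_label)
seed = [
    (("tax", "tax forms 2024"), 1),
    (("tax", "vacation photos"), 0),
    (("roadmap", "Q3 roadmap deck"), 1),
    (("roadmap", "lunch menu"), 0),
]
unlabeled = [("tax", "tax filing guide"), ("roadmap", "random notes")]

# Amplify only if the labeler tracks human judgment closely enough.
if agreement(llm_label, seed) >= 0.9:
    amplified = [(x, llm_label(x)) for x in unlabeled]
    print(amplified)
```

The small human set plays two roles here: it gates whether amplification happens at all, and it stays available as ground truth for auditing the scaled-up labels.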
Dropbox Tech · 1 min · Feb 12, 2026

How low-bit inference enables efficient AI

Dropbox engineers have adopted low-bit inference to improve the efficiency of their AI models, reducing memory and compute requirements. The technique quantizes tensors to lower precision, such as 8-bit instead of 16-bit, which shrinks the memory footprint and enables faster processing, especially on NVIDIA GPUs with Tensor Cores, which perform more operations per second at lower precision. Because modern AI models, especially attention-based architectures, are dominated by repeated matrix multiplications, low-bit inference can double throughput and improve energy efficiency on the large-scale linear algebra at the heart of neural networks.

AI
low-bit inference · AI · Efficient AI · Resource Management
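A minimal, pure-Python sketch of the quantization step described above — mapping floats to int8 with a single scale factor — assuming simple symmetric quantization. Real systems operate on whole tensors with hardware support; this only shows the arithmetic:

```python
# Symmetric int8 quantization: q = round(x / scale), x ≈ q * scale.

def quantize_int8(xs):
    """Map floats to int8 range [-127, 127] with one scale factor."""
    scale = max(abs(x) for x in xs) / 127 or 1.0  # avoid 0 for all-zero input
    q = [round(x / scale) for x in xs]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from quantized values."""
    return [v * scale for v in q]

xs = [0.1, -0.5, 0.25, 0.9, -1.2]
q, scale = quantize_int8(xs)
approx = dequantize(q, scale)

# int8 uses half the memory of fp16 (a quarter of fp32); the price is
# a bounded rounding error of at most half a quantization step.
max_err = max(abs(a - b) for a, b in zip(xs, approx))
print(q, round(max_err, 4))
assert max_err <= scale / 2
```

The memory saving is immediate (1 byte per value instead of 2 or 4); the throughput gain comes from hardware like Tensor Cores executing more low-precision multiply-accumulates per cycle.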
Dropbox Tech · 1 min · Feb 11, 2026

Insights from our executive roundtable on AI and engineering productivity

Dropbox hosted an executive roundtable on AI and engineering productivity to share best practices with other top companies. To accelerate progress, Dropbox tied AI tooling to tangible business results, prioritized adoption, and experimented with a variety of AI workflows. The results showed a significant increase in engineering productivity, with most developers using AI tools and less friction around adoption.

AI
AI coding tools · Cursor · Engineering Productivity · AI Adoption
Dropbox Tech · 1 min · Jan 28, 2026

Engineering VP Josh Clemm on how we use knowledge graphs, MCP, and DSPy in Dash

Dropbox developed Dash, an AI-powered platform that connects and analyzes content from third-party apps and services within a single interface, using knowledge graphs and a highly secure data store. To build Dash, the team engineered a context engine consisting of connectors, a content understanding layer, and a graph-based model for storing and retrieving relevant information. By choosing index-based retrieval over federated retrieval, Dropbox improved data freshness, gained access to company-wide connectors, and enabled offline ranking experiments, though this required significant custom work and careful handling of ingestion-time challenges.

Databases · Performance
Knowledge Graphs · MCP · DSPy · Dash · API Design · Data Pipeline · Knowledge Management · Fullstack Development · Machine Learning
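The graph-based model can be pictured as a typed adjacency list: nodes are content items and edges are relations between them. This toy sketch (all names hypothetical) shows the storage and retrieval shape, not Dash's actual schema:

```python
# Toy knowledge graph: typed edges in an adjacency list.

from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # node -> list of (relation, neighbor)
        self.edges = defaultdict(list)

    def add(self, src, relation, dst):
        """Record a typed edge from src to dst."""
        self.edges[src].append((relation, dst))

    def related(self, node, relation=None):
        """Neighbors of a node, optionally filtered by edge type."""
        return [d for r, d in self.edges[node] if relation in (None, r)]

kg = KnowledgeGraph()
kg.add("doc:roadmap", "authored_by", "user:alice")
kg.add("doc:roadmap", "mentioned_in", "msg:standup")
kg.add("user:alice", "member_of", "team:search")

print(kg.related("doc:roadmap"))                 # all neighbors
print(kg.related("doc:roadmap", "authored_by"))  # filtered by edge type
```

Index-based retrieval, as the post contrasts with federated retrieval, means this graph is populated ahead of time by connectors and queried locally, rather than fanning a query out to each third-party service at request time.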
Dropbox Tech · 1 min · Dec 18, 2025

Inside the feature store powering real-time AI in Dropbox Dash

Dropbox Dash, a real-time AI-powered workspace, relies on a feature store to manage and deliver data signals to its ranking system. To meet sub-100ms latency requirements and massive parallel read volumes, Dropbox built a hybrid feature store using Feast, AWS DynamoDB, and Spark that serves features quickly, adapts to changing user behavior, and integrates with existing infrastructure. The architecture combines Feast's orchestration layer and serving APIs with a Go service for feature serving, cloud-based storage for offline indexing, and Spark jobs for feature ingestion and computation. This setup gives engineers a streamlined experience while abstracting away offline and online data management, pipeline orchestration, and data freshness guarantees. A three-part ingestion system balances freshness with reliability, incorporating new user signals in real time while handling complex transformations and avoiding infrastructure overload. The system consistently achieves p95 latencies in the 25-35ms range, reliably meeting Dash's latency targets.

AI
feature store · Real-Time AI · Data Retrieval · Contextual Ranking
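A highly simplified sketch of the online half of such a feature store — ingestion writes (feature, value, timestamp) rows, reads return only fresh values — assuming a dict-backed store in place of DynamoDB and made-up feature names:

```python
# Simplified online feature store with a freshness guard.

import time

class OnlineFeatureStore:
    def __init__(self, max_age_s=3600):
        # (entity_id, feature_name) -> (value, written_at)
        self._rows = {}
        self.max_age_s = max_age_s

    def ingest(self, entity_id, features, now=None):
        """Write a batch of feature values for one entity."""
        now = time.time() if now is None else now
        for name, value in features.items():
            self._rows[(entity_id, name)] = (value, now)

    def get_online_features(self, entity_id, names, now=None):
        """Read features for ranking; stale or missing ones come back None."""
        now = time.time() if now is None else now
        out = {}
        for name in names:
            value, written = self._rows.get((entity_id, name), (None, 0))
            out[name] = value if now - written <= self.max_age_s else None
        return out

store = OnlineFeatureStore(max_age_s=60)
store.ingest("user:42", {"clicks_7d": 18, "last_query": "okr deck"}, now=1000)
print(store.get_online_features("user:42", ["clicks_7d", "stale_feat"], now=1030))
```

Returning `None` for stale entries rather than an old value is one way to make the freshness guarantee explicit at the read path; the ranking layer can then decide how to degrade.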
Dropbox Tech · 1 min · Nov 26, 2025

Building the future: highlights from Dropbox’s 2025 summer intern class

This year's cohort of 43 interns in Dropbox's Camp Dropbox Intern Program made meaningful contributions across teams, including engineering, AI, and search infrastructure. Through projects such as file history tracking, AI Sentinel, and cache optimization, interns tackled high-impact work aligned with company goals while cultivating growth, innovation, and connections. They also benefited from over 6,000 hours of one-on-one mentorship, Virtual First events, and the Emerging Talent Summit, and contributed to the development of Dropbox Dash, an AI-powered universal search product.

Dropbox Tech · 1 min · Nov 17, 2025

How Dash uses context engineering for smarter AI

To build a more intelligent and agentic AI, Dropbox's Dash uses context engineering to limit the information the model sees and focus it on what's relevant. Strategies include providing a single interface for retrieval tools, filtering context down to useful information, and introducing specialized agents for complex tasks that require deeper reasoning. As a result, Dash's performance has improved, and the model makes faster, better decisions, ultimately leading to better outcomes.

AI
AI · Context Engineering
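One context-engineering step from the summary — filtering retrieved context down to relevant material within a budget — can be sketched as a score-and-budget filter. The scores, the crude token estimate, and the budget are all assumptions for illustration:

```python
# Keep only high-scoring retrieved chunks, best first, within a budget.

def build_context(chunks, budget_tokens, min_score=0.5):
    """chunks: list of (relevance_score, text). Greedily keep relevant
    chunks, highest score first, until the token budget is exhausted."""
    kept, used = [], 0
    for score, text in sorted(chunks, reverse=True):
        tokens = len(text.split())  # crude token estimate
        if score >= min_score and used + tokens <= budget_tokens:
            kept.append(text)
            used += tokens
    return kept

chunks = [
    (0.92, "Q3 roadmap: ship Dash context engine"),
    (0.31, "cafeteria menu for Tuesday"),
    (0.75, "Dash retrieval design notes"),
    (0.88, "meeting notes on context filtering for Dash"),
]
print(build_context(chunks, budget_tokens=12))
```

The point mirrors the summary: the model sees only what clears the relevance bar and fits the budget, which is what lets it decide faster and with less noise.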