The Problem with Deep Nesting
Every API you integrate with has its own idea of how to structure data. Stripe nests payment method details three levels deep. GitHub wraps commit data inside layers of repository and ref objects. The Shopify API returns product variants nested inside products nested inside collections.
You end up writing code like response.data.order.customer.address.shipping.zip and praying that every one of those intermediate objects exists. When one does not, you get the most useless error message JavaScript has to offer: Cannot read properties of undefined.
Here are the techniques I actually use to handle this.
Optional Chaining: The First Line of Defense
Optional chaining (?.) is the simplest fix for deeply nested access. But most developers only use it for property access and forget it works for method calls and bracket notation too.
Three forms of optional chaining:
const api = {
  user: {
    profile: {
      addresses: [
        { type: "shipping", zip: "90210" }
      ]
    },
    getName() { return "Alice"; }
  }
};

// Property access
const zip = api.user?.profile?.addresses?.[0]?.zip;
// "90210"

// Bracket notation (useful for dynamic keys)
const field = "zip";
const zipDynamic = api.user?.profile?.addresses?.[0]?.[field];
// "90210"

// Method calls
const name = api.user?.getName?.();
// "Alice"

// When the chain breaks, you get undefined (not an error)
const missing = api.user?.billing?.card?.last4;
// undefined

The catch: optional chaining is great for reading, but it does not help when you need to extract multiple values from the same nested structure. If you are pulling 10 fields out of a deeply nested object, you will end up repeating the same chain prefix over and over. That is where destructuring and flattening come in.
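When several fields live under the same nested prefix, destructuring with defaults pulls them all out in one statement. A sketch, using a hypothetical response shaped like the api object above (note that the = {} defaults only kick in for undefined, not for null):

```typescript
// Hypothetical response, same shape as the `api` example.
const response = {
  user: {
    profile: {
      addresses: [{ type: "shipping", zip: "90210", city: "Beverly Hills" }]
    }
  }
};

// The `= {}` / `= []` defaults keep the destructuring from throwing
// when an intermediate object or array is missing.
const {
  user: {
    profile: {
      addresses: [{ zip: shipZip = "", city: shipCity = "" } = {}] = []
    } = {}
  } = {}
} = response;

console.log(shipZip, shipCity);
// 90210 Beverly Hills
```

The chain prefix appears once, and every missing level falls back to an empty default instead of crashing.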
Flattening Nested Objects
When you need to export nested JSON to CSV or feed it into a system that only understands flat key-value pairs, you need a flattening function. The idea is simple: walk the object recursively, and join nested keys with a delimiter.
A recursive flatten function:
function flattenObject(
  obj: Record<string, unknown>,
  prefix = "",
  separator = "."
): Record<string, unknown> {
  const result: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    const newKey = prefix ? `${prefix}${separator}${key}` : key;
    if (
      value !== null &&
      typeof value === "object" &&
      !Array.isArray(value)
    ) {
      Object.assign(
        result,
        flattenObject(value as Record<string, unknown>, newKey, separator)
      );
    } else {
      result[newKey] = value;
    }
  }
  return result;
}
// Usage
const order = {
  id: "ord_123",
  customer: {
    name: "Alice",
    address: {
      street: "123 Main St",
      city: "Portland",
      state: "OR"
    }
  },
  total: 59.99
};

console.log(flattenObject(order));
// {
//   "id": "ord_123",
//   "customer.name": "Alice",
//   "customer.address.street": "123 Main St",
//   "customer.address.city": "Portland",
//   "customer.address.state": "OR",
//   "total": 59.99
// }

One thing to watch out for: arrays. The function above skips them intentionally. If you flatten arrays, you get keys like items.0.name, which is rarely what you want in a CSV. Better to handle arrays separately, either by JSON-stringifying them or by creating one row per array element.
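The JSON-stringify approach can be sketched as a variant of the same recursive walk. This is an illustrative helper (the name flattenWithArrays is mine, not a standard API), hard-coding the dot separator:

```typescript
// Variant of the flatten idea: arrays become JSON strings in a single
// cell instead of being skipped or exploded into items.0.name keys.
function flattenWithArrays(
  obj: Record<string, unknown>,
  prefix = ""
): Record<string, unknown> {
  const result: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    const newKey = prefix ? `${prefix}.${key}` : key;
    if (Array.isArray(value)) {
      result[newKey] = JSON.stringify(value); // one cell per array
    } else if (value !== null && typeof value === "object") {
      Object.assign(
        result,
        flattenWithArrays(value as Record<string, unknown>, newKey)
      );
    } else {
      result[newKey] = value;
    }
  }
  return result;
}

const cart = {
  id: "c1",
  items: [{ sku: "A" }, { sku: "B" }],
  meta: { tags: ["sale", "new"] }
};

console.log(flattenWithArrays(cart));
// {
//   id: "c1",
//   items: '[{"sku":"A"},{"sku":"B"}]',
//   "meta.tags": '["sale","new"]'
// }
```

The stringified cell round-trips: JSON.parse on the cell value recovers the original array if a downstream consumer needs it.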
Querying with JSONPath
When you have a large JSON document and need to find specific values buried somewhere in the structure, JSONPath is far more practical than writing chains of optional access by hand. Think of it as XPath for JSON.
Common JSONPath expressions:
const data = {
  store: {
    books: [
      { title: "Clean Code", price: 34.99, author: "Robert Martin" },
      { title: "Refactoring", price: 47.99, author: "Martin Fowler" },
      { title: "DDIA", price: 39.99, author: "Martin Kleppmann" }
    ],
    location: { city: "Seattle", state: "WA" }
  }
};

// All book titles
// $.store.books[*].title
// -> ["Clean Code", "Refactoring", "DDIA"]

// First book
// $.store.books[0]
// -> { title: "Clean Code", price: 34.99, author: "Robert Martin" }

// Books cheaper than $40
// $.store.books[?(@.price < 40)]
// -> [{ title: "Clean Code", ... }, { title: "DDIA", ... }]

// All authors, anywhere in the document (recursive descent)
// $..author
// -> ["Robert Martin", "Martin Fowler", "Martin Kleppmann"]

// All prices, anywhere in the document
// $..price
// -> [34.99, 47.99, 39.99]

JSONPath is especially useful when you are exploring an unfamiliar API response and do not yet know the exact path to the data you need. The recursive descent operator (..) lets you search the entire document without knowing the structure. Once you have found what you need, you can write the specific path for production code.
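At its core, recursive descent is just "walk everything, collect every value under this key". A minimal sketch of that idea in plain TypeScript, with no JSONPath library and none of the filter or wildcard syntax (collectDeep is a hypothetical helper name):

```typescript
// Collect every value stored under `key`, at any depth -- the
// $..key idea, without a JSONPath engine.
function collectDeep(node: unknown, key: string, out: unknown[] = []): unknown[] {
  if (Array.isArray(node)) {
    for (const item of node) collectDeep(item, key, out);
  } else if (node !== null && typeof node === "object") {
    for (const [k, v] of Object.entries(node)) {
      if (k === key) out.push(v);
      collectDeep(v, key, out); // keep descending past a match
    }
  }
  return out;
}

const catalog = {
  store: {
    books: [
      { title: "Clean Code", price: 34.99 },
      { title: "Refactoring", price: 47.99 }
    ],
    location: { city: "Seattle" }
  }
};

console.log(collectDeep(catalog, "price"));
// [34.99, 47.99]
```

For anything beyond this (filters, slices, wildcards), reach for a real JSONPath implementation rather than growing the helper.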
Reshaping API Responses
The most common real-world problem is not just accessing nested data. It is transforming the shape of an API response into the structure your frontend components actually need. Your UI wants a flat list of items with specific field names; the API gives you a nested graph of related objects.
Reshaping a typical API response:
// What the API gives you
const apiResponse = {
  data: {
    orders: [
      {
        id: "ord_1",
        created_at: "2025-01-15T10:30:00Z",
        line_items: [
          {
            product: { name: "Widget", sku: "WDG-001" },
            quantity: 2,
            unit_price: { amount: 1999, currency: "USD" }
          }
        ],
        shipping: {
          address: { city: "Portland", state: "OR" },
          method: "express"
        }
      }
    ],
    pagination: { page: 1, total_pages: 5 }
  }
};
// What your component needs
interface OrderRow {
  orderId: string;
  date: string;
  productName: string;
  sku: string;
  quantity: number;
  priceFormatted: string;
  city: string;
  shippingMethod: string;
}
function reshapeOrders(response: typeof apiResponse): OrderRow[] {
  return response.data.orders.flatMap(order =>
    order.line_items.map(item => ({
      orderId: order.id,
      date: new Date(order.created_at).toLocaleDateString(),
      productName: item.product.name,
      sku: item.product.sku,
      quantity: item.quantity,
      priceFormatted: `$${(item.unit_price.amount / 100).toFixed(2)}`,
      city: order.shipping.address.city,
      shippingMethod: order.shipping.method
    }))
  );
}
// Result: a flat array of rows, ready for a table or CSV export
// [
//   {
//     orderId: "ord_1",
//     date: "1/15/2025",
//     productName: "Widget",
//     sku: "WDG-001",
//     quantity: 2,
//     priceFormatted: "$19.99",
//     city: "Portland",
//     shippingMethod: "express"
//   }
// ]

Notice the use of flatMap instead of map. When an order has multiple line items, map would give you an array of arrays. flatMap flattens it into a single array of rows, which is exactly what a table or CSV needs.
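The difference is easy to see on a toy input (hypothetical order shape, not the API response above):

```typescript
const orders = [
  { id: "ord_1", items: ["Widget", "Gadget"] },
  { id: "ord_2", items: ["Gizmo"] }
];

// map keeps one element per order, so multi-item orders stay nested:
const nested = orders.map(o => o.items.map(n => ({ order: o.id, item: n })));
// [ [{...Widget}, {...Gadget}], [{...Gizmo}] ]  -- an array of arrays

// flatMap concatenates the inner arrays into one flat list of rows:
const flat = orders.flatMap(o => o.items.map(n => ({ order: o.id, item: n })));
// [{...Widget}, {...Gadget}, {...Gizmo}]        -- one row per line item
```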
Build these reshape functions as a dedicated layer between your API client and your UI. When the API changes its structure (and it will), you only update one function instead of hunting through every component that touches the data.
Nested JSON to CSV: Handling the Edge Cases
Converting nested JSON to CSV sounds straightforward until you run into arrays, null values, and inconsistent structures across rows. Here is a function that handles the common edge cases.
Robust nested-to-CSV conversion:
function nestedJsonToCsv(
  items: Record<string, unknown>[]
): string {
  // Step 1: Flatten all items
  const flatItems = items.map(item =>
    flattenObject(item as Record<string, unknown>)
  );

  // Step 2: Collect ALL keys across all items
  // (different items might have different nested structures)
  const allKeys = new Set<string>();
  flatItems.forEach(item => {
    Object.keys(item).forEach(key => allKeys.add(key));
  });
  const headers = Array.from(allKeys).sort();

  // Step 3: Build CSV rows
  const escapeCell = (val: unknown): string => {
    if (val === null || val === undefined) return "";
    const str = Array.isArray(val) ? JSON.stringify(val) : String(val);
    // Escape quotes and wrap in quotes if needed
    if (str.includes(",") || str.includes('"') || str.includes("\n")) {
      return '"' + str.replace(/"/g, '""') + '"';
    }
    return str;
  };
  const rows = flatItems.map(item =>
    headers.map(h => escapeCell(item[h])).join(",")
  );

  return [headers.join(","), ...rows].join("\n");
}
// Usage
const users = [
  { name: "Alice", prefs: { theme: "dark", lang: "en" } },
  { name: "Bob", prefs: { theme: "light", notifications: true } }
];

console.log(nestedJsonToCsv(users));
// name,prefs.lang,prefs.notifications,prefs.theme
// Alice,en,,dark
// Bob,,true,light

The key detail here is step 2: collecting keys from all items, not just the first one. Real-world data is messy. Some objects have fields that others do not. If you only look at the first item for headers, you will silently drop columns.