
A Real-World Approach to Improving a Website’s Information Architecture

Jun 6, 2025 · 6 min read

Just like a cluttered pantry, a confusing website structure can frustrate users and make it harder for them to find what they need. But how do you prove to stakeholders that a cleanup is necessary?

In this post, I share a real-world approach taken by a cross-functional team to validate a website’s information architecture. From tracking behavioral analytics to conducting talk-aloud card sorts, this method helped pinpoint areas where users struggled and paved the way for smarter, user-friendly improvements.

We redesigned the site's structure based on the findings of this research. In the final round of testing, agreement among users on where they would expect to find critical information improved by 394% compared to the initial benchmark. That jump gave the team confidence that the new site categories and groupings would make information much easier to find.

This is the first of three posts diving into IA research, starting with how to evaluate your current structure. Let’s dig in!

Understanding the Problem

When tackling an information architecture project, you might consider three phases of research:

1. Validating the Current Structure

2. Identifying Gaps or Issues

3. Enhancing the User Experience

In this post, we’ll focus on how to validate your current structure.

Example visual representation of a site architecture.

Validating the Current Structure

If your website is hard to navigate, it can frustrate users who struggle to find what they need. There are two steps you can take to identify what aspects of your site may need improvement:

Step One: Monitor Behavioral Analytics

Step Two: Conduct a Talk-Aloud Closed Card Sort or Tree Test

Step One: Monitor Behavioral Analytics

Low Page Views: If some main categories have many page views while others have very few, the low-traffic categories may be irrelevant or confusingly labeled. Alternatively, if the high-traffic categories are more prominent or appear in multiple places, the design layout may be the real factor. Don't go by page views alone, though: if a category is strategically important but only relevant to a small audience, low page views might be expected.

Low Click-Through Rate (CTR): On many sites, category pages are designed to drive deeper engagement. If users are not clicking on important links or buttons on a category page, the labels may be unclear or the placement unintuitive. That isn't always the case, however; evaluate CTR in the context of your site's design and navigation system.

Engagement Rate: If users spend little time on important category pages and the pages have low interaction levels, this could indicate irrelevant content or a difficult navigation structure.

Exits: If visitors quickly leave a category page, it could mean they couldn’t find the information they were looking for or that the content didn’t meet their expectations. However, they might also leave because they found exactly what they needed. It’s essential to assess the overall experience before drawing conclusions.

Abandonment Rate: If users abandon forms or checkout processes at an unexpectedly high rate, this may indicate confusion caused by labels or navigation patterns.

Site Search: Frequent reliance on the search function rather than browsing can indicate difficulties in navigating content effectively. However, some users default to search not because navigation is flawed, but because they have learned to search for information as their primary method of finding answers. Analyzing search terms can help identify potential content gaps or determine whether key information is buried too deep within the site structure.
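These signals are easiest to act on when you can see them side by side. Below is a minimal sketch of how a few of them might be derived from a page-level analytics export, assuming a hypothetical CSV with columns for page path, page views, exits, engaged sessions, and sessions; the column names and the rule for spotting category pages are placeholders you would adapt to your own platform and URL structure.

```python
import pandas as pd

# Hypothetical analytics export; column names will vary by platform.
df = pd.read_csv("page_metrics.csv")  # pagePath, pageViews, exits, engagedSessions, sessions

# Derived signals discussed above.
df["exitRate"] = df["exits"] / df["pageViews"]
df["engagementRate"] = df["engagedSessions"] / df["sessions"]

# Crude heuristic for identifying category pages; replace with your own rule.
category_pages = df[df["pagePath"].str.count("/") == 1]

# Flag category pages that look under-used or leaky relative to the rest of the site.
low_views = category_pages[category_pages["pageViews"] < category_pages["pageViews"].quantile(0.25)]
high_exit = category_pages[category_pages["exitRate"] > category_pages["exitRate"].quantile(0.75)]

print("Possible low-interest categories:\n", low_views[["pagePath", "pageViews"]])
print("Possible dead-end categories:\n", high_exit[["pagePath", "exitRate"]])
```

The thresholds here are arbitrary; the value is in getting the signals into one table so the team can discuss them together rather than metric by metric.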

What Our Team Did: Behavioral Analytics & Page Feedback

We tracked page entries, exits, category engagement, site searches, and the completion of key user journeys. This quantitative data was combined with page feedback gathered from users on each page.

On each page, users could indicate whether they found what they were looking for and elaborate on what they had been trying to do. Thousands of comments were collected, categorized, and analyzed alongside site metrics and card sorting results.

This comprehensive approach helped us uncover hidden content, clarify confusing navigation labels, and identify new content needs.
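To give a sense of what the comment-categorization step can look like in practice, here is a minimal sketch of a keyword-based first pass, assuming feedback exported to a hypothetical CSV with a free-text comment column. The categories and keywords shown are illustrative, not the ones our team used, and a human review pass is still essential.

```python
import pandas as pd

# Hypothetical export of page feedback; one comment per row.
comments = pd.read_csv("page_feedback.csv")  # columns: pagePath, foundIt, comment

# Illustrative keyword rules for a first-pass categorization; humans review the results.
rules = {
    "navigation": ["can't find", "cannot find", "where is", "menu", "link"],
    "content_gap": ["no information", "missing", "not listed", "doesn't say"],
    "terminology": ["what does", "meaning", "unclear", "confusing term"],
}

def categorize(text: str) -> str:
    text = str(text).lower()
    for category, keywords in rules.items():
        if any(k in text for k in keywords):
            return category
    return "uncategorized"

comments["category"] = comments["comment"].apply(categorize)
print(comments["category"].value_counts())
```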

Step Two: Conduct a Talk-Aloud Closed Card Sort or Tree Test

While behavioral analytics can show you what people are doing on your site, it cannot tell you why. Two approaches, a closed card sort and a tree test, each combined with asking participants to talk aloud as they work, can give you valuable insight into people's thought patterns, where they get confused, and how they understand the information.

What Our Team Did: Content Audit and Closed Card Sort

Content Audit

Prior to conducting a closed card sort, our team performed a content audit of the website. We took these steps:

Inventory & Categorization: We recorded content elements such as pages, documents, media, and interactive components. Then we documented details for each element, including H1s, H2s, H3s, URLs, the existing content category, and descriptions.

Content Quality & Selection: We identified content elements or topics that were outdated, duplicated, or unnecessary. If the content details (such as H2s and H3s) were not written in plain language or were too broad, we marked them for revision in the next step.

Since this was a small site, we did not need automated extraction. If we had been dealing with thousands of content elements, we would have used a site crawler like Screaming Frog or Sitebulb, or custom scripts, to extract the content details into a spreadsheet (a sketch of such a script appears below, after the list of steps).

Define Categories and Topics: It’s recommended to choose no more than 40 topics for a card sort. We narrowed our site’s content to 37 topics that would be sorted into 6 categories.
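For teams that do need automated extraction, here is a minimal sketch of what a custom script might look like, using requests and BeautifulSoup against a hand-maintained list of URLs. A crawler such as Screaming Frog or Sitebulb would discover pages and handle scale for you; the URLs below are placeholders.

```python
import csv

import requests
from bs4 import BeautifulSoup

# Hand-maintained list of pages to audit; a crawler would discover these automatically.
urls = [
    "https://example.com/",
    "https://example.com/services/",
]

with open("content_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url", "h1", "h2s", "h3s"])
    for url in urls:
        resp = requests.get(url, timeout=10)
        soup = BeautifulSoup(resp.text, "html.parser")
        h1 = soup.h1.get_text(strip=True) if soup.h1 else ""
        h2s = "; ".join(h.get_text(strip=True) for h in soup.find_all("h2"))
        h3s = "; ".join(h.get_text(strip=True) for h in soup.find_all("h3"))
        writer.writerow([url, h1, h2s, h3s])
```

The resulting spreadsheet becomes the backbone of the inventory and quality steps described above.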

Closed Card Sort

We conducted a Closed Card Sort using a research platform that allows for recording during the test. We asked 15 people to sort 37 topics from our site into six existing categories.

Our team started with a closed card sort and did not conduct a tree test.

How a closed card sort helped our team:

  • Seeing what items participants grouped together was just as important as validating the category labels.
  • A card sort let us see what items were consistently grouped together across a series of closed card sorts and open card sorts with the same 37 topics but different category labels.

When we analyzed the results from our initial card-sorting exercise, we saw clear patterns in how participants grouped topics.

Dendrogram results from our closed card sort to validate the current structure.

Then, when we looked at a second round of sorting, where participants created their own groupings, we noticed similar patterns emerging again.

Dendrogram results from our open card sort, in which participants created their own groupings.

These insights helped us refine how topics were grouped first. After that, we focused on naming each group, riffing on the participant-generated labels. The process wasn't a straight path; we kept revisiting our visualizations to fine-tune both the groupings and the labels.
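Most card-sorting platforms produce these dendrograms for you, but if you only have the raw sort data, the same picture can be rebuilt with a short script. The sketch below is a minimal example, assuming a hypothetical CSV with one row per participant, card, and chosen category; it computes how often each pair of cards was placed together and clusters the cards hierarchically.

```python
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

# Hypothetical raw export: one row per (participant, card, chosen category).
sorts = pd.read_csv("card_sort_results.csv")  # columns: participant, card, category

cards = sorted(sorts["card"].unique())
n_participants = sorts["participant"].nunique()

# Co-occurrence: fraction of participants who placed each pair of cards in the same category.
co = pd.DataFrame(0.0, index=cards, columns=cards)
for _, group in sorts.groupby(["participant", "category"]):
    placed = group["card"].tolist()
    for a in placed:
        for b in placed:
            co.loc[a, b] += 1
co /= n_participants

# Convert similarity to distance and cluster hierarchically.
distance = 1 - co.values
condensed = squareform(distance, checks=False)
tree = linkage(condensed, method="average")

dendrogram(tree, labels=cards, orientation="right")
plt.tight_layout()
plt.show()
```

Cards that sit close together in the resulting tree are the ones participants consistently grouped, which is exactly the signal we leaned on when refining the groupings.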

Up Next: Identifying Gaps or Issues

In the next post, I'll walk through how we defined the new groups and labels in more detail, including how we used the behavioral analytics, page feedback, and closed card sort results to identify:

  • What items users struggled to categorize
  • What items did not fit into existing groups
  • Which labels or terms caused confusion or inconsistency in understanding
  • What items were most frequently grouped together

Spoiler alert: we had a messy pantry!

Laura Cochran