Advent of Code 2024

A log of my experience with Advent of Code 2024. There are spoilers in here, so beware.

December 25, 2024 by Kyle Dickey


All of my Advent of Code solutions are available on GitHub.

Day 25: Code Chronicle

Merry Christmas!! Today is the final day of Advent of Code 2024!

Today’s puzzle, fittingly for Christmas Day, was about analyzing lock and key combinations in a fancy virtual five-pin tumbler system. We were given schematics for various locks and keys and needed to determine which ones could work together based on their pin heights.

The schematics used a simple ASCII format where '#' represented filled space and '.' represented empty space. The key insight was that locks had their top row filled and bottom row empty, while keys had the opposite pattern. Each schematic could be converted into a sequence of heights by counting the number of filled spaces in each column.

I started by creating a function to parse the heights from the schematic format:

func parsePattern(lines []string) []int {
    heights := make([]int, 5)
    for row := range lines {
        for col := range lines[row] {
            if lines[row][col] == '#' {
                heights[col]++
            }
        }
    }
    // Every column includes the schematic's one fully filled row
    // (top for locks, bottom for keys), so subtract it
    for i := range heights {
        heights[i]--
    }
    return heights
}

The core logic involved checking if a lock and key could fit together. This meant ensuring that for each column, the combined height of the lock’s pin and the key’s pin didn’t exceed the available space:

func canFit(lock, key []int) bool {
    for i := range lock {
        if lock[i] + key[i] > 5 {
            return false
        }
    }
    return true
}

Part 1 simply required counting all valid lock/key combinations. Since this was the final day of Advent of Code 2024, there was no Part 2 - just a heartwarming conclusion to the story about helping the Chief Historian complete Santa’s chronicle in time for the sleigh launch.
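That counting step is just a nested loop over every lock/key pair. A minimal, self-contained sketch (the heights and `countFits` name here are illustrative, not lifted from my actual solution):

```go
package main

import "fmt"

// countFits counts lock/key pairs whose combined per-column heights
// never exceed the available space (5 in the puzzle's schematics).
func countFits(locks, keys [][]int, space int) int {
	count := 0
	for _, lock := range locks {
		for _, key := range keys {
			fits := true
			for i := range lock {
				if lock[i]+key[i] > space {
					fits = false
					break
				}
			}
			if fits {
				count++
			}
		}
	}
	return count
}

func main() {
	locks := [][]int{{0, 5, 3, 4, 3}, {1, 2, 0, 5, 3}}
	keys := [][]int{{5, 0, 2, 1, 3}, {4, 3, 4, 0, 2}, {3, 0, 2, 0, 1}}
	fmt.Println(countFits(locks, keys, 5)) // 3 of the 6 pairings fit
}
```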

Thoughts

Today’s puzzle was a nice Christmas present - interesting enough to be engaging but not so complex that it would take away from holiday celebrations. The visualization of the pins and keys using ASCII art made it particularly fun to work with, and the physical analogy to tumbler locks helped make the problem intuitive.

The most elegant aspect was how the puzzle took something as complex as a real pin tumbler lock and abstracted it into a simple height-matching problem. The fact that we only needed to track column heights rather than the full lock/key geometry made the solution clean and straightforward.

I’d rate this a 4/10 difficulty - while it required careful attention to detail in parsing the schematics and checking compatibility, the core logic was relatively straightforward. The lack of a Part 2 was a nice touch, letting us wrap up the Advent of Code journey on a satisfying note.

Merry Christmas, and see you next year!


Day 24: Crossed Wires

Today’s puzzle placed us in a simulation of boolean logic gates and wires. We were given a network of AND, OR, and XOR gates, with initial wire values that propagate through the system. The twist was that some gates had their output wires accidentally swapped, and we needed to figure out which ones.

For Part 1, we needed to calculate the final values on all z-wires after evaluating the circuit. I approached this using a map to track wire values and gates:

type GateInfo struct {
    operation int // 0=AND, 1=OR, 2=XOR
    inputs    []string
    output    string
}

The core logic involved evaluating gates as soon as their input values became available:

for len(gates) > 0 {
    for wireName, gate := range gates {
        if canEvalGate(gate, wires) {
            wires[wireName] = evaluateGate(gate, wires)
            delete(gates, wireName)
        }
    }
}
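The two helpers the loop relies on aren't shown above; a hedged sketch of what they might look like, reusing the GateInfo struct (wire values stored as 0/1 ints is my assumption):

```go
package main

import "fmt"

type GateInfo struct {
	operation int // 0=AND, 1=OR, 2=XOR
	inputs    []string
	output    string
}

// canEvalGate reports whether both input wires already have values.
func canEvalGate(g GateInfo, wires map[string]int) bool {
	for _, in := range g.inputs {
		if _, ok := wires[in]; !ok {
			return false
		}
	}
	return true
}

// evaluateGate applies the gate's boolean operation to its two inputs.
func evaluateGate(g GateInfo, wires map[string]int) int {
	a, b := wires[g.inputs[0]], wires[g.inputs[1]]
	switch g.operation {
	case 0:
		return a & b
	case 1:
		return a | b
	default:
		return a ^ b
	}
}

func main() {
	wires := map[string]int{"x00": 1, "y00": 0}
	g := GateInfo{operation: 2, inputs: []string{"x00", "y00"}, output: "z00"}
	if canEvalGate(g, wires) {
		fmt.Println(evaluateGate(g, wires)) // 1 XOR 0 = 1
	}
}
```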

Part 2 revealed that this circuit was actually meant to be a binary adder, adding numbers represented by x-wires and y-wires to produce a result on the z-wires. However, four pairs of gate outputs were swapped. The key insight was recognizing the structure of a binary adder - it’s built from a series of half-adders and full-adders that process bits from least to most significant.

To find the swapped wires, I looked for gates that didn’t match the expected adder pattern:

// Find half adder gates for each bit position
m1 = find(fmt.Sprintf("x%s", n), fmt.Sprintf("y%s", n), "XOR", gateStrings)
n1 = find(fmt.Sprintf("x%s", n), fmt.Sprintf("y%s", n), "AND", gateStrings)

if carry != "" {
    // Full adder logic when we have a carry bit
    r1 = find(carry, m1, "AND", gateStrings)
    if r1 == "" {
        m1, n1 = n1, m1
        swapped = append(swapped, m1, n1)
        r1 = find(carry, m1, "AND", gateStrings)
    }
}

Thoughts

Today’s puzzle was a fascinating combination of digital logic and pattern recognition. While Part 1 was straightforward circuit simulation, Part 2’s realization that we were working with a binary adder added an interesting layer of complexity. The key to solving Part 2 was understanding how binary adders are constructed - they follow a very specific pattern of XOR and AND gates to handle each bit position and carry values.

The most elegant aspect was how the puzzle guided us from simple gate evaluation to discovering the underlying structure of a binary adder. Once you recognized the pattern of half-adders and full-adders, finding the swapped wires became much more manageable - they were the ones that broke this expected pattern.

I’d rate this a 6/10 difficulty. The circuit simulation itself wasn’t particularly complex, but recognizing and properly handling the binary adder structure in Part 2 required some careful thinking and debugging.

See you tomorrow!


Day 23: LAN Party

Today’s puzzle placed us in a network mapping scenario at Easter Bunny HQ, trying to analyze computer connections to locate a LAN party. We were given a list of computer connections in the format kh-tc (representing bidirectional connections between computers), and needed to analyze different types of connected groups.

For Part 1, we needed to find sets of three interconnected computers where at least one computer name started with ‘t’. I started by creating an adjacency list representation of the network:

func parseInput(input string) AdjList {
    adj := make(AdjList)
    lines := strings.Split(input, "\n")

    for _, line := range lines {
        nodes := strings.Split(line, "-")
        a, b := nodes[0], nodes[1]

        // Initialize maps if needed
        if adj[a] == nil {
            adj[a] = make(map[string]bool)
        }
        if adj[b] == nil {
            adj[b] = make(map[string]bool)
        }

        // Add bidirectional edge
        adj[a][b] = true
        adj[b][a] = true
    }
    return adj
}

The core of Part 1 involved finding triangles (sets of three interconnected nodes) in the graph. The trick was to avoid counting each triangle multiple times since we could find the same triangle starting from any of its three nodes:

// Inside findTriangles function
for node := range adj {
    for neighbor := range adj[node] {
        if seen[neighbor+node] {
            continue
        }
        seen[node+neighbor] = true

        // Check each neighbor of neighbor for triangles
        for non := range adj[neighbor] {
            if non == node {
                continue
            }
            if adj[non][node] && hasTPrefixNode(node, neighbor, non) {
                count++
            }
        }
    }
}
return count / 3  // Each triangle is counted 3 times

Part 2 changed the game completely - we needed to find the largest set of computers where every computer was directly connected to every other computer (in graph theory, this is known as a maximum clique). This required implementing the Bron-Kerbosch algorithm, which efficiently finds maximal cliques in a graph.

Thoughts

Today’s puzzle was a fascinating exploration of graph theory concepts. The transition from finding triangles in Part 1 to finding the maximum clique in Part 2 was particularly elegant. While my first attempt at Part 2 tried to brute force the solution by checking all possible combinations, that quickly proved impractical. The key insight was recognizing this as a classic maximum clique problem and implementing the Bron-Kerbosch algorithm.

The trickiest part was getting the triangle detection correct in Part 1 - it’s easy to either miss triangles or count them multiple times if you’re not careful with how you track seen combinations. The use of a seen map to prevent double-counting while still finding all valid triangles took some careful thinking to get right.

I’d rate this a 5/10 difficulty - while the concepts aren’t overly complex, implementing everything correctly, especially the Bron-Kerbosch algorithm for Part 2, required solid understanding of graph algorithms and careful attention to detail.

See you tomorrow!


Day 22: Monkey Market

Today’s puzzle placed us in a jungle marketplace, trying to predict pseudorandom number sequences used by monkeys to set prices for hiding spots. The challenge involved implementing a specific sequence generator and then analyzing patterns in the resulting numbers.

For Part 1, we needed to generate 2000 numbers for each starting value using a specific mixing and pruning process. The sequence generation was straightforward once you understood the steps:

func getSecretNum(num int) int {
    num = prune(mix(num*64, num))   // Step 1
    num = prune(mix(num/32, num))   // Step 2
    num = prune(mix(num*2048, num)) // Step 3
    return num
}

func mix(a, b int) int { return a ^ b }
func prune(a int) int  { return a % PRUNE_NUM }
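Here's the generator as a runnable whole, with PRUNE_NUM filled in as 16777216 (2^24, the puzzle's pruning modulus). The puzzle's own sanity checks make handy tests: mix(42, 15) is 37, prune(100000000) is 16113920, and the first secret generated from 123 is 15887950.

```go
package main

import "fmt"

const PRUNE_NUM = 16777216 // 2^24, the puzzle's pruning modulus

func mix(a, b int) int { return a ^ b }
func prune(a int) int  { return a % PRUNE_NUM }

// getSecretNum advances a secret number one step through the
// three mix-and-prune stages described by the puzzle.
func getSecretNum(num int) int {
	num = prune(mix(num*64, num))   // Step 1: multiply by 64, mix, prune
	num = prune(mix(num/32, num))   // Step 2: divide by 32, mix, prune
	num = prune(mix(num*2048, num)) // Step 3: multiply by 2048, mix, prune
	return num
}

func main() {
	fmt.Println(getSecretNum(123)) // 15887950
}
```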

Part 2 added an interesting twist - instead of looking at the full numbers, we only cared about their ones digits and needed to find sequences of changes that would maximize our banana collection. This required tracking a sliding window of differences between consecutive digits:

last4 := [4]int{10, 10, 10, 10} // start with impossible changes
for i := 0; i < 2000; i++ {
    prev := num % 10
    num = getSecretNum(num)
    curr := num % 10

    // shift the window and add new change
    last4[0] = last4[1]
    last4[1] = last4[2]
    last4[2] = last4[3]
    last4[3] = curr - prev
    ...
}

The key insight was using maps to efficiently track both the sequences we’ve seen (seen) and their cumulative values (sequences). This let us avoid counting the same sequence multiple times for a given starting number while still accumulating the total bananas we could collect.

Thoughts

Today’s puzzle was an interesting mix of number theory and pattern recognition. While implementing the sequence generator itself was straightforward, the real elegance came in Part 2’s approach to finding optimal price change sequences.

The solution ended up being much simpler than my first attempts, which tried to brute force all possible four-number sequences. Instead, by tracking the sequences we actually encounter and their resulting values, we let the problem’s natural constraints guide us to the answer.

I particularly enjoyed how the puzzle led us from thinking about full numbers in Part 1 to just their ones digits and changes in Part 2. It’s a nice reminder that sometimes you don’t need all the information - just the relevant bits.

I’d rate this a 3/10 difficulty. Once you understand what the puzzle is asking for, both parts follow naturally from each other and don’t require any particularly complex algorithms or data structures.

See you tomorrow!


Day 21: Keypad Conundrum

Today’s puzzle dropped us into a fascinating chain of robot control systems, where we had to help guide robots through a sequence of keypads to type door codes. What made this particularly interesting was the layered nature of control - we’re controlling a robot that controls another robot that controls another robot, and so on, all the way down to the final robot typing on a numeric keypad!

The puzzle involved two types of keypads: a numeric keypad (like an ATM) and a directional keypad (with arrows and an activate button). Each robot arm starts pointing at the ‘A’ button on its keypad, and we had to find the shortest possible sequence of button presses that would ultimately type codes like “029A” on the door’s numeric keypad.

For Part 1, I started by representing each keypad as a set of coordinates to track possible movements:

numMap := map[string]Coord{
    "A": {2, 0},
    "0": {1, 0},
    "1": {0, 1},
    "2": {1, 1},
    "3": {2, 1},
    // ... and so on
}

dirMap := map[string]Coord{
    "A": {2, 1},
    "^": {1, 1},
    "<": {0, 0},
    "v": {1, 0},
    ">": {2, 0},
}

The core challenge was handling the sequences of movements. Each step needed to translate into a series of directional presses to move the robot arm to the target button:

func getNumPadSequence(input []string, start string, numMap map[string]Coord) []string {
    curr := numMap[start]
    seq := []string{}

    for _, char := range input {
        dest := numMap[char]
        dx, dy := dest.x-curr.x, dest.y-curr.y

        // Build movement sequences based on relative positions
        if curr.y == 0 && dest.x == 0 {
            // Handle special case for bottom row
            seq = append(seq, buildVerticalMoves(dy)...)
            seq = append(seq, buildHorizontalMoves(dx)...)
        } else if dx < 0 {
            // Prioritize left movements
            seq = append(seq, buildHorizontalMoves(dx)...)
            seq = append(seq, buildVerticalMoves(dy)...)
        }
        // ... other movement cases

        curr = dest
        seq = append(seq, "A") // Press the button
    }
    return seq
}
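The move-builder helpers referenced above are straightforward repeaters. A hedged guess at their shape, assuming positive dx means right and positive dy means up (consistent with the coordinate maps earlier, where y grows toward the top row):

```go
package main

import "fmt"

// buildHorizontalMoves emits "<" or ">" presses covering a horizontal
// offset dx (positive = right).
func buildHorizontalMoves(dx int) []string {
	moves := []string{}
	for i := 0; i < dx; i++ {
		moves = append(moves, ">")
	}
	for i := 0; i < -dx; i++ {
		moves = append(moves, "<")
	}
	return moves
}

// buildVerticalMoves emits "^" or "v" presses covering a vertical
// offset dy (positive = up).
func buildVerticalMoves(dy int) []string {
	moves := []string{}
	for i := 0; i < dy; i++ {
		moves = append(moves, "^")
	}
	for i := 0; i < -dy; i++ {
		moves = append(moves, "v")
	}
	return moves
}

func main() {
	fmt.Println(buildHorizontalMoves(-2)) // two presses of "<"
	fmt.Println(buildVerticalMoves(1))    // one press of "^"
}
```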

Part 2 threw us for a loop by increasing the chain of robots from 2 to 25! This meant our sequence processing needed to handle many more layers of control. The key insight was using proper caching to avoid recalculating sequences we’d seen before:

func countSequences(input []string, maxRobots, robot int, cache map[string][]int, dirMap map[string]Coord) int {
    key := strings.Join(input, "")
    if val, ok := cache[key]; ok && robot <= len(val) && val[robot-1] != 0 {
        return val[robot-1]
    }

    if _, ok := cache[key]; !ok {
        cache[key] = make([]int, maxRobots)
    }
    // ... process sequences and update cache
}

Thoughts

Today’s puzzle was particularly devious in its complexity. What started as a seemingly straightforward pathfinding problem quickly evolved into a multi-layered sequence optimization challenge. The real difficulty wasn’t in finding a path through any single keypad - it was in managing the cascading effects of each button press through multiple layers of robot control.

The most enlightening moment came when I realized we could treat each sequence as a series of coordinate movements rather than trying to simulate the actual robot arm movements. This simplified the logic considerably and made the caching strategy much more effective.

Performance was a significant concern, especially in Part 2 with its 25 layers of control. The caching system was crucial here - without it, the combinatorial explosion of possible sequences would have made the problem intractable.

I’d rate this a 9/10 difficulty. While the core concepts weren’t particularly exotic, the layered nature of the problem and the need for efficient sequence handling made this one of the more challenging puzzles so far. Getting everything working correctly while maintaining good performance required careful thought and several iterations of the solution.

See you tomorrow!


Day 20: Race Condition

Today’s puzzle dropped us right outside a CPU for a race condition festival! We needed to help programs navigate through a twisting code path while utilizing a special cheating mechanic. The challenge involved pathfinding with an interesting twist - being able to pass through walls for a limited time.

For Part 1, programs could cheat exactly once during a race, disabling collision for up to 2 picoseconds. I started by creating some basic structures to represent the grid and tracking positions:

type Pos struct {
    row, col int
}

type Grid struct {
    cells    [][]byte
    start    Pos
    end      Pos
    rows     int
    cols     int
}

The core insight was that instead of trying to simulate the actual path with wall-passing, we could first find the base distances to every reachable point using BFS:

func bfs(g Grid) map[Pos]int {
    distances := make(map[Pos]int)
    q := []Pos{g.start}
    distances[g.start] = 0

    for len(q) > 0 {
        c := q[0]
        q = q[1:]

        for _, m := range moves {
            n := Pos{c.row + m.row, c.col + m.col}
            if !isInBounds(n, g) || g.cells[n.row][n.col] == '#' {
                continue
            }

            if _, visited := distances[n]; !visited {
                distances[n] = distances[c] + 1
                q = append(q, n)
            }
        }
    }

    return distances
}

Then, for each position we can reach, we look at all positions within the cheat distance (2 steps for Part 1) and calculate how much time we could save by jumping there. If we can reach that position normally through the maze, the difference between the normal path length and the cheat path length tells us how many picoseconds we save.
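That check boils down to a pair of loops over the BFS distances. A hedged sketch (countCheats and the tiny U-shaped corridor below are illustrative, not my actual solution):

```go
package main

import "fmt"

type Pos struct {
	row, col int
}

func abs(n int) int {
	if n < 0 {
		return -n
	}
	return n
}

// countCheats counts ordered pairs of track positions (a, b) where a
// Manhattan jump of at most maxSkip steps from a to b saves at least
// minSave picoseconds versus walking the normal path.
func countCheats(dist map[Pos]int, maxSkip, minSave int) int {
	count := 0
	for a, da := range dist {
		for b, db := range dist {
			d := abs(a.row-b.row) + abs(a.col-b.col)
			if d <= maxSkip && db-da-d >= minSave {
				count++
			}
		}
	}
	return count
}

func main() {
	// A U-shaped corridor: jumping 2 steps across the gap saves 4 steps.
	dist := map[Pos]int{
		{0, 0}: 0, {1, 0}: 1, {2, 0}: 2,
		{2, 1}: 3, {2, 2}: 4,
		{1, 2}: 5, {0, 2}: 6,
	}
	fmt.Println(countCheats(dist, 2, 4)) // 1
}
```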

Part 2 simply increased the maximum cheat distance from 2 to 20 steps. The beauty of the solution is that it required almost no changes - just updating the maximum skip distance value:

maxSkipDist := 2 // default for part 1
if part == 2 {
    maxSkipDist = 20 // increased skip distance for part 2
}

Thoughts

Today’s puzzle was a fascinating exercise in thinking about pathfinding differently. Instead of trying to simulate the actual paths with wall-passing (which would be much more complex), we could solve it by finding normal path distances first and then calculating potential shortcuts.

The key insight was realizing we didn’t need to track the actual paths - we just needed to know if positions were reachable and how many steps it would take to reach them normally. Then, any position we could “jump” to that would save us steps represented a potential cheat opportunity.

Performance-wise, while we’re checking a lot of potential jump points, the solution remains efficient because we only need to run the BFS once to get all the base distances. This made scaling to Part 2’s larger jump distance trivial.

I’d rate this a 6/10 difficulty - while the core concept isn’t overwhelmingly complex, figuring out the right approach took some careful thinking. The problem description initially leads you toward trying to simulate actual paths with wall-passing, but finding the more elegant solution based on calculating potential shortcuts made the problem much more manageable.

See you tomorrow!


Day 19: Linen Layout

Today’s puzzle placed us at an onsen (Japanese hot spring) where we needed to help arrange towels with colored stripe patterns. Each towel has a pattern of stripes (white, blue, black, red, or green), and we needed to determine which designs could be created using combinations of available towel patterns.

For Part 1, we needed to determine which designs were possible using any combination of the available patterns. I used a recursive approach with memoization to solve this efficiently:

cache := make(map[string]int)

var solve func(string) int
solve = func(s string) int {
    if _, ok := cache[s]; !ok {
        if len(s) == 0 {
            return 1
        }
        res := 0
        for _, pattern := range patterns {
            if strings.HasPrefix(s, string(pattern)) {
                res += solve(s[len(pattern):])
            }
        }
        cache[s] = res
    }
    return cache[s]
}

for _, design := range designs {
    if solve(design) > 0 {
        out++
    }
}

Part 2 added an interesting twist - instead of just checking if a design was possible, we needed to count all the different ways each design could be created using the available patterns. Fortunately, our recursive solution already handled this case! The key change was simply summing up the number of ways each design could be made rather than just checking if it was possible.

for _, design := range designs {
    if part == 1 {
        if solve(design) > 0 {
            out++
        }
    } else {
        out += solve(design)
    }
}
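The same memoized recursion works as a standalone function (countWays is my naming for this sketch; the patterns below are hand-made, not real puzzle input):

```go
package main

import (
	"fmt"
	"strings"
)

// countWays counts the distinct ways design can be built by
// concatenating patterns, memoizing on the remaining suffix.
func countWays(design string, patterns []string, cache map[string]int) int {
	if design == "" {
		return 1
	}
	if v, ok := cache[design]; ok {
		return v
	}
	res := 0
	for _, p := range patterns {
		if strings.HasPrefix(design, p) {
			res += countWays(design[len(p):], patterns, cache)
		}
	}
	cache[design] = res
	return res
}

func main() {
	patterns := []string{"a", "ab", "b"}
	fmt.Println(countWays("ab", patterns, map[string]int{}))  // 2: a+b or ab
	fmt.Println(countWays("aba", patterns, map[string]int{})) // 2: a+b+a or ab+a
}
```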

The input parsing was straightforward, just splitting on newlines and commas:

func parseInput(input string) ([]string, []string) {
    s := strings.Split(input, "\n\n")
    patterns := strings.Split(strings.TrimSpace(s[0]), ", ")
    designs := strings.Split(strings.TrimSpace(s[1]), "\n")
    return patterns, designs
}

Thoughts

Today’s puzzle was relatively straightforward once you recognized it as a string matching problem that could be solved recursively. The key insight was that we could break down each design into smaller subproblems by trying each available pattern at the start of the remaining string. Using memoization prevented us from recomputing the same substrings multiple times, making the solution quite efficient.

Part 2 was particularly elegant because it didn’t require any algorithmic changes - our solution was already counting all possible ways to create each design, we just needed to sum those counts instead of checking if they were greater than zero.

I’d rate this a 3/10 difficulty - while it required some careful thinking about recursion and memoization, the core concept was straightforward and the implementation was clean and concise.

See you tomorrow!


Day 18: RAM Run

Today’s puzzle placed us inside a computer’s memory space, trying to navigate through a grid while avoiding corrupted memory locations. The challenge involved pathfinding through a dynamically changing grid where bytes would “fall” and corrupt specific coordinates.

For Part 1, we needed to find the shortest path from (0,0) to (70,70) while avoiding corrupted spaces after the first 1024 bytes had fallen. I implemented this using Dijkstra’s algorithm with a priority queue for efficient path finding:

func findShortestPath(grid map[Coord]bool, start, end Coord) int {
    // init distances
    dist := make(map[Coord]int)
    dist[start] = 0

    // create a priority queue
    pq := make(PQueue, 0)
    heap.Init(&pq)
    heap.Push(&pq, &Item{coord: start, priority: 0})

    for pq.Len() > 0 {
        curr := heap.Pop(&pq).(*Item)

        // if we've reached the end, return the distance
        if curr.coord == end {
            return curr.priority
        }

        // if we've found a longer path to this point, skip
        if curr.priority > dist[curr.coord] {
            continue
        }

        // check all possible moves
        for _, dir := range dirs {
            next := Coord{curr.coord.x + dir.x, curr.coord.y + dir.y}

            if isValidMove(next, grid, end) {
                newDist := curr.priority + 1
                if d, exists := dist[next]; !exists || newDist < d {
                    dist[next] = newDist
                    heap.Push(&pq, &Item{coord: next, priority: newDist})
                }
            }
        }
    }

    return -1
}

Part 2 flipped the problem around - instead of finding a path through a fixed corruption pattern, we needed to find the first byte that would make the path to the exit impossible. This required a different approach using BFS to efficiently check for path existence after each byte falls:

func pathExists(grid map[Coord]bool, start, end Coord) bool {
    visited := make(map[Coord]bool)
    queue := []Coord{start}
    visited[start] = true

    for len(queue) > 0 {
        curr := queue[0]
        queue = queue[1:]

        if curr == end {
            return true
        }

        for _, dir := range dirs {
            next := Coord{curr.x + dir.x, curr.y + dir.y}
            if isValidMove(next, grid, end) && !visited[next] {
                visited[next] = true
                queue = append(queue, next)
            }
        }
    }

    return false
}
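Putting Part 2 together, a self-contained sketch of the drop-and-check loop (the 3x3 memory space and falling bytes here are made up for illustration; the real puzzle uses a 71x71 grid):

```go
package main

import "fmt"

type Coord struct {
	x, y int
}

var dirs = []Coord{{1, 0}, {-1, 0}, {0, 1}, {0, -1}}

// isValidMove assumes the memory space spans (0,0)..(end.x,end.y).
func isValidMove(c Coord, corrupted map[Coord]bool, end Coord) bool {
	return c.x >= 0 && c.y >= 0 && c.x <= end.x && c.y <= end.y && !corrupted[c]
}

// pathExists runs a plain BFS to check whether end is still reachable.
func pathExists(corrupted map[Coord]bool, start, end Coord) bool {
	visited := map[Coord]bool{start: true}
	queue := []Coord{start}
	for len(queue) > 0 {
		curr := queue[0]
		queue = queue[1:]
		if curr == end {
			return true
		}
		for _, d := range dirs {
			next := Coord{curr.x + d.x, curr.y + d.y}
			if isValidMove(next, corrupted, end) && !visited[next] {
				visited[next] = true
				queue = append(queue, next)
			}
		}
	}
	return false
}

func main() {
	// Bytes fall one at a time until (0,0)->(2,2) is cut off.
	falling := []Coord{{2, 1}, {1, 1}, {0, 1}, {1, 0}}
	corrupted := map[Coord]bool{}
	for _, b := range falling {
		corrupted[b] = true
		if !pathExists(corrupted, Coord{0, 0}, Coord{2, 2}) {
			fmt.Printf("%d,%d\n", b.x, b.y) // 0,1 blocks the last route
			break
		}
	}
}
```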

Thoughts

Today’s puzzle was an interesting mix of pathfinding algorithms. The key insight was recognizing that while Dijkstra’s algorithm with a priority queue was ideal for finding the shortest path in Part 1, a simpler BFS would be more efficient for Part 2 where we only needed to check if any path existed.

The transition from Part 1 to Part 2 was particularly elegant - instead of processing all bytes at once and finding a path, we needed to process bytes one at a time and check path existence after each one. This made the problem more dynamic and required thinking about efficiency in a different way.

Performance-wise, using a map to track corrupted spaces and implementing both Dijkstra’s and BFS properly were crucial. The priority queue implementation helped Part 1 run efficiently, while the simpler BFS approach in Part 2 avoided unnecessary complexity when we just needed to check path existence.

I’d rate this a 5/10 difficulty - not overly complex conceptually, but requiring solid understanding of different pathfinding algorithms and careful implementation to handle both parts efficiently.

See you tomorrow!


Day 17: Chronospatial Computer

Today’s puzzle involved implementing a unique 3-bit computer emulator. The challenge was to simulate a computer with three registers (A, B, C) and eight different operations that use 3-bit instructions and operands. I found this particularly interesting because it combined bitwise operations with careful state management.

For Part 1, we needed to execute a program and collect its outputs. I started by creating structures to parse the initial state and program. This was pretty straightforward - just parsing the input.

The core of the solution is the program execution logic. Each instruction consists of an opcode and an operand, which we process in pairs:

func runProgram(a, b, c int, program []int) []int {
    out := make([]int, 0)

    for ip := 0; ip < len(program); ip += 2 {
        opcode, operand := program[ip], program[ip+1]

        // Process combo operand - this was tricky to understand at first
        value := operand
        switch operand {
        case 4: value = a
        case 5: value = b
        case 6: value = c
        }

        // Execute the instruction
        switch opcode {
        case 0: // adv - divide A by 2^value (implemented as right shift)
            a >>= value
        case 1: // bxl - XOR B with literal operand
            b ^= operand  // Note: uses operand, not value!
        case 2: // bst - set B to value mod 8
            b = value % 8
        case 3: // jnz - jump if A is not zero
            if a != 0 {
                ip = operand - 2
            }
        case 4: // bxc - XOR B with C
            b ^= c
        case 5: // out - output value mod 8
            out = append(out, value%8)
        case 6: // bdv - divide A by 2^value, store in B
            b = a >> value
        case 7: // cdv - divide A by 2^value, store in C
            c = a >> value
        }
    }
    return out
}

Part 2 threw an interesting twist - we needed to find the lowest value for register A that would make the program output itself. The approach involves building up the value bit by bit, using 3-bit chunks (since it’s a 3-bit computer):

// Part 2: Find lowest value of register A that makes program output itself
a = 0 // A is built up 3 bits at a time, so it must start at zero
for pos := len(program) - 1; pos >= 0; pos-- {
    a <<= 3  // shift left by 3 bits for each position
    // Try values until we find one that outputs the correct sequence
    for !slices.Equal(runProgram(a, b, c, program), program[pos:]) {
        a++
    }
}

Thoughts

This puzzle was fascinating because it required understanding both low-level computer architecture concepts (instruction pointer, opcodes, operands) and bitwise operations. The trickiest parts were:

  1. Understanding the distinction between “combo” operands (which can reference register values) and literal operands
  2. Implementing the division operations using bit shifts (>>=)
  3. Getting the jump instruction (jnz) to work correctly with the instruction pointer
  4. Figuring out how to approach Part 2’s self-referential output requirement

The key insight for Part 2 was realizing we could build the solution incrementally, working backwards through the program and using 3-bit chunks to construct the final value. This works because each instruction in the output must be a 3-bit number.

I’d rate this a 7/10 difficulty - while the individual operations weren’t too complex, understanding the problem specification and figuring out the approach for Part 2 required careful thinking. The fact that we’re working with a 3-bit computer architecture also meant keeping track of several important details about how values should be processed and stored.

See you tomorrow!


Day 16: Reindeer Maze

Today’s puzzle involved helping reindeer navigate through a maze in the most efficient way possible. Each reindeer starts on a start tile (S) facing east and needs to reach an end tile (E). The twist is that movement costs vary dramatically - moving forward costs just 1 point, but rotating 90 degrees costs a whopping 1000 points!

For Part 1, I needed to find the path with the lowest possible score. I started by creating some basic types to represent the maze and movement:

type Point struct {
    x, y int
}

type Direction struct {
    dx, dy int
}

type Maze struct {
    grid  [][]string
    start Point
    end   Point
}

The core of the solution uses a sort of janky priority queue to always process the paths with lowest scores first. This ensures we find the optimal path:

for len(q) > 0 {
    sort.Slice(q, func(i, j int) bool {
        return q[i].score < q[j].score
    })

    current := q[0]
    q = q[1:]

    if current.pos == end {
        return current.score
    }

    // Try moving forward and turning...
}

Part 2 threw an interesting twist at us - instead of just finding the lowest score path, we needed to find how many tiles appear in ANY path that achieves the lowest score. This required tracking complete paths and being careful about how we count unique positions:

type QueueItem struct {
    pos   Point
    dir   int
    score int
    path  []Point  // Now we need to track the full path
}

The key insight was that we needed to use the same pathfinding approach twice - once to find the minimum score, and again to collect all paths that achieve that score. Then we could count unique positions across all those paths:

ls := findLowestScore(maze)
paths := findAllOptimalPaths(maze, ls)
return countUniqueTiles(paths)

Thoughts

Today’s puzzle was an interesting exercise in pathfinding with some unique constraints. The huge cost difference between moving forward and turning made for some interesting path optimization, and Part 2’s requirement to find ALL optimal paths added another layer of complexity.

A key insight that saved some headaches was realizing that since turns cost 1000 while moves cost 1, and the maze is nowhere near 1000 tiles across, a path with an extra turn can never make up that cost with fewer forward moves.

I’d rate this a 6/10 difficulty - while the core pathfinding concepts weren’t too exotic, getting all the pieces working together correctly required careful implementation and debugging. The transition from Part 1 to Part 2 was particularly interesting, requiring us to adapt our solution to track and analyze multiple optimal paths.

See you tomorrow!


Day 15: Warehouse Woes

Today’s puzzle was about helping lanternfish with their warehouse robot problem. We had to predict the movements of a malfunctioning robot pushing boxes around in a warehouse. The robot follows a sequence of moves (^, v, <, >), and when it encounters boxes, it attempts to push them - unless doing so would cause the robot or a box to hit a wall.

For Part 1, I started by creating structures to represent the warehouse and implementing recursive movement logic for pushing chains of boxes:

type Warehouse struct {
    moveSeq string
    boxes   map[Pair]struct{}
    robot   Pair
    walls   map[Pair]struct{}
    width   int
    height  int
}

func moveBoxes(w *Warehouse, box Pair, dir byte) {
    next := getNextPair(box, dir)
    _, isBox := w.boxes[next]
    if isBox {
        moveBoxes(w, next, dir)
    }
    delete(w.boxes, box)
    w.boxes[next] = struct{}{}
}
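The excerpt above assumes the push is already known to be legal; before moving anything, the robot's move should be validated by walking the chain of boxes until hitting either a wall (blocked) or a free cell (pushable). A minimal sketch of that check, with assumed helper types:

```go
package main

import "fmt"

type Pair struct{ x, y int }

// canPush reports whether pushing from pos in direction d eventually
// reaches an empty cell before the chain is backed against a wall.
func canPush(pos Pair, d Pair, boxes, walls map[Pair]struct{}) bool {
	for {
		pos = Pair{pos.x + d.x, pos.y + d.y}
		if _, isWall := walls[pos]; isWall {
			return false // chain is backed against a wall
		}
		if _, isBox := boxes[pos]; !isBox {
			return true // found a free cell to push into
		}
	}
}

func main() {
	boxes := map[Pair]struct{}{{1, 0}: {}, {2, 0}: {}}
	walls := map[Pair]struct{}{{4, 0}: {}}
	// Robot at (0,0) pushes right: two boxes, then a free cell at (3,0).
	fmt.Println(canPush(Pair{0, 0}, Pair{1, 0}, boxes, walls)) // true
}
```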

Part 2 threw a fascinating twist at us - everything except the robot was now twice as wide! Each original box (O) became a wide box ([]), and walls (#) became double walls (##). Instead of modifying my Part 1 solution, I created new structures specifically for Part 2:

type BigBox struct {
    left  Pair
    right Pair
}

type BigWarehouse struct {
    boxes    map[BigBox]struct{}
    boxParts map[Pair]BigBox
    robot    Pair
    // ... other fields
}

Thoughts

Today’s puzzle was a fascinating exercise in state management and spatial reasoning. The tricky part wasn’t just tracking positions - it was managing the complex interactions between boxes, especially in Part 2 where boxes could push multiple other boxes at once due to their width.

The key insight was realizing that Part 2 needed a completely different approach rather than trying to modify Part 1. Using separate data structures for wide boxes and handling horizontal/vertical movements differently made the solution much clearer. Getting all the edge cases right and ensuring proper box chain movements took quite a bit of debugging.

I’d rate this an 8/10 difficulty. While the concept was straightforward, implementing all the movement rules correctly and handling the wide boxes in Part 2 required careful thought and significant refactoring.

See you tomorrow!


Day 14: Restroom Redoubt

Today’s puzzle was about predicting robot movement patterns in a bounded space. We needed to track multiple robots moving in straight lines that would wrap around the edges of the area, like a game of Snake meets Conway’s Game of Life.

For Part 1, we had to simulate the robots’ movement for 100 seconds and calculate a “safety factor” based on their final positions in each quadrant. I started by creating basic structures to represent the robots and their movement:

type Coordinate struct {
    x, y int
}

type Robot struct {
    pos Coordinate // position
    vel Coordinate // velocity
}

The core simulation logic involves moving each robot according to its velocity and handling the wrapping behavior when they hit the edges. The trick here was getting the modulo arithmetic right to handle negative positions correctly:

func moveRobots(robots []Robot) {
    for i := range robots {
        // Update position
        robots[i].pos.x += robots[i].vel.x
        robots[i].pos.y += robots[i].vel.y

        // Handle wrapping
        robots[i].pos.x = ((robots[i].pos.x % 101) + 101) % 101
        robots[i].pos.y = ((robots[i].pos.y % 103) + 103) % 103
    }
}

After simulating 100 seconds, we need to count robots in each quadrant, being careful to exclude robots on the middle lines.
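That quadrant tally can be sketched like this (assuming the 101×103 grid, so the dividing lines are x = 50 and y = 51):

```go
package main

import "fmt"

type Coordinate struct{ x, y int }

type Robot struct {
	pos Coordinate
	vel Coordinate
}

// safetyFactor multiplies the robot counts of the four quadrants,
// skipping robots that sit on the middle row or column.
func safetyFactor(robots []Robot) int {
	const midX, midY = 50, 51 // grid is 101 wide and 103 tall
	var q [4]int
	for _, r := range robots {
		if r.pos.x == midX || r.pos.y == midY {
			continue // on a dividing line: counts for no quadrant
		}
		i := 0
		if r.pos.x > midX {
			i = 1
		}
		if r.pos.y > midY {
			i += 2
		}
		q[i]++
	}
	return q[0] * q[1] * q[2] * q[3]
}

func main() {
	// One robot per quadrant gives a safety factor of 1*1*1*1.
	robots := []Robot{
		{pos: Coordinate{0, 0}}, {pos: Coordinate{100, 0}},
		{pos: Coordinate{0, 102}}, {pos: Coordinate{100, 102}},
	}
	fmt.Println(safetyFactor(robots)) // 1
}
```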

Part 2 introduced an interesting twist - we needed to find when the robots would form a pattern resembling a Christmas tree. The key insight was looking for long horizontal lines of robots, which would form part of the tree pattern:

func hasLongHorizontalLine(robots []Robot) bool {
    // Track which x positions are occupied in each row
    rowCounts := make(map[int]map[int]bool)
    for i := 0; i < 103; i++ {
        rowCounts[i] = make(map[int]bool)
    }

    // Record robot positions by row
    for _, robot := range robots {
        rowCounts[robot.pos.y][robot.pos.x] = true
    }

    // Look for a row with many robots and long consecutive sequences
    for y := 0; y < 103; y++ {
        if len(rowCounts[y]) > 30 {
            consecutive := 0
            for x := 0; x < 100; x++ {
                if rowCounts[y][x] && rowCounts[y][x+1] {
                    consecutive++
                    if consecutive > 25 {
                        return true
                    }
                } else {
                    consecutive = 0
                }
            }
        }
    }

    return false
}

Thoughts

Today’s puzzle combined several interesting concepts - coordinate systems, modular arithmetic, and pattern recognition.

The key insight for Part 2 was realizing that the Christmas tree pattern would necessarily involve long horizontal lines of robots. Instead of trying to detect the entire tree shape, we could just look for these characteristic lines.

I’d rate this a 6/10 difficulty. While the core concepts weren’t overwhelmingly complex, getting all the details right - especially the wrapping behavior and pattern detection - required careful implementation and debugging. The transition from Part 1 to Part 2 was particularly interesting, requiring us to think about the problem in a completely different way.

See you tomorrow!


Day 13: Claw Contraption

Today’s puzzle involved solving a system of linear equations to determine optimal button press sequences for arcade claw machines. We needed to calculate the minimum number of tokens required to win prizes by manipulating a claw along X and Y coordinates using two buttons with different costs and movement patterns.

For Part 1, I initially implemented a brute force solution, checking all possible combinations of button presses up to 100:

type ClawMachine struct {
    btnA  Coordinate
    btnB  Coordinate
    prize Coordinate
}

func bruteCalculateTokens(machine *ClawMachine) int {
    for a := 0; a <= 100; a++ {
        for b := 0; b <= 100; b++ {
            xPos := a*machine.btnA.x + b*machine.btnB.x
            yPos := a*machine.btnA.y + b*machine.btnB.y

            if xPos == machine.prize.x && yPos == machine.prize.y {
                return (3 * a) + b  // 3 tokens per A press, 1 per B press
            }
        }
    }
    return 0
}

Part 2 introduced a significant twist - all prize coordinates were offset by 10^13 units, making the brute force approach impractical. This required a more mathematical solution using Cramer’s Rule to solve the system of linear equations:

func solveEquation(m *ClawMachine) (int, int) {
    // Using Cramer's Rule to solve:
    // ax*A + bx*B = px (x equation)
    // ay*A + by*B = py (y equation)

    d := m.btnA.x*m.btnB.y - m.btnA.y*m.btnB.x
    if d == 0 {
        return 0, 0 // degenerate system: button vectors are parallel
    }

    d1 := m.prize.x*m.btnB.y - m.prize.y*m.btnB.x
    d2 := m.prize.y*m.btnA.x - m.prize.x*m.btnA.y

    // Check if we have integer solutions
    if d1%d != 0 || d2%d != 0 {
        return 0, 0
    }

    return d1/d, d2/d
}
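As a sanity check, here's the same Cramer's-rule math as a standalone sketch, run on the puzzle's first example machine (A = (94, 34), B = (22, 67), prize = (8400, 5400)), which costs 280 tokens:

```go
package main

import "fmt"

type Coordinate struct{ x, y int }

type ClawMachine struct {
	btnA, btnB, prize Coordinate
}

// solve applies Cramer's rule; (0, 0) means no integer solution exists.
func solve(m ClawMachine) (int, int) {
	d := m.btnA.x*m.btnB.y - m.btnA.y*m.btnB.x
	if d == 0 {
		return 0, 0 // button vectors are parallel: no unique solution
	}
	d1 := m.prize.x*m.btnB.y - m.prize.y*m.btnB.x
	d2 := m.prize.y*m.btnA.x - m.prize.x*m.btnA.y
	if d1%d != 0 || d2%d != 0 {
		return 0, 0 // the unique solution isn't an integer
	}
	return d1 / d, d2 / d
}

func main() {
	m := ClawMachine{Coordinate{94, 34}, Coordinate{22, 67}, Coordinate{8400, 5400}}
	a, b := solve(m)
	fmt.Println(3*a + b) // 280 tokens (a=80 presses of A, b=40 of B)
}
```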

Thoughts

The key insight was recognizing this as a linear system that could be solved efficiently using linear algebra rather than exhaustive search. Having recently studied Cramer’s Rule for a linear algebra final proved unexpectedly useful for this implementation.

I’d rate this a 3/10 difficulty - the entire thing was fairly straightforward. The hardest part was figuring out how to calculate Part 2; once I remembered the linear algebra I had just been studying, it was super easy.

See you tomorrow!


Day 12: Garden Groups

Today’s puzzle was all about analyzing connected regions in a grid - specifically, garden plots that form regions when they contain the same type of plant. The challenge involved calculating areas and boundaries of these regions, with some clever twists in how boundaries are counted between parts 1 and 2.

For Part 1, we needed to find connected regions of same-letter plants and calculate their area multiplied by their perimeter. I approached this using a flood-fill algorithm with some careful boundary tracking:

type Garden struct {
    symbol     rune
    size       int
    coords     map[Coord]bool
    boundaries map[Direction][]boundary
}

The core insight was that we needed to track not just the coordinates in each region, but also their boundaries in each cardinal direction. This made calculating perimeters much more straightforward:

func checkBoundary(grid [][]rune, pos Coord, symbol rune, dir Direction, garden *Garden) {
    var isEdge bool
    switch dir {
    case North:
        isEdge = pos.row == 0 || grid[pos.row-1][pos.col] != symbol
        // Similar cases for other directions...
    }

    if isEdge {
        garden.boundaries[dir] = append(garden.boundaries[dir], boundary{pos.row, pos.col, true})
    }
}

Part 2 introduced a fascinating twist - instead of using the perimeter length, we needed to count distinct “sides” of each region. This meant that a complex shape like an ‘E’ would have many more sides than its perimeter length might suggest. The trick was to modify how we count boundaries:

func prune(garden *Garden) {
    for dir := range garden.boundaries {
        if dir == North || dir == South {
            pruneBoundaries(garden.boundaries[dir], true)  // horizontal edges
        } else {
            pruneBoundaries(garden.boundaries[dir], false) // vertical edges
        }
    }
}
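The idea behind the pruning, sketched independently of my actual types: sort the boundary edges for one direction, then merge runs of adjacent edges on the same row (or column) into a single side:

```go
package main

import (
	"fmt"
	"sort"
)

// edge describes one cell-length boundary segment, e.g. for north-facing
// edges: line is the row, pos is the column.
type edge struct{ line, pos int }

// countSides merges runs of adjacent same-direction edges into single sides.
func countSides(edges []edge) int {
	sort.Slice(edges, func(i, j int) bool {
		if edges[i].line != edges[j].line {
			return edges[i].line < edges[j].line
		}
		return edges[i].pos < edges[j].pos
	})
	sides := 0
	for i, e := range edges {
		// a new side starts unless this edge continues the previous one
		if i == 0 || e.line != edges[i-1].line || e.pos != edges[i-1].pos+1 {
			sides++
		}
	}
	return sides
}

func main() {
	// North-facing edges of a 1x3 horizontal region collapse into one side.
	fmt.Println(countSides([]edge{{0, 0}, {0, 1}, {0, 2}})) // 1
}
```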

The most challenging aspect was handling regions that contained “holes” - other regions entirely enclosed within them. This required careful boundary counting to ensure we didn’t double-count shared edges while still properly accounting for interior boundaries.

Today’s solution is quite lengthy, so I haven’t included it here. You can find it on my GitHub.

Thoughts

Today’s puzzle was a fascinating exercise in geometric reasoning and boundary detection. The main challenges were:

  1. Implementing a robust flood-fill algorithm that could properly identify connected regions
  2. Tracking boundaries efficiently without double-counting shared edges
  3. Making the mental shift from perimeter-based counting in Part 1 to side-based counting in Part 2
  4. Handling edge cases like regions with holes or regions that touch only at corners

The key insight that really unlocked the solution was realizing that we needed to track boundaries per direction rather than just as a simple count. This made it much easier to handle both the perimeter calculation in Part 1 and the side counting in Part 2.

I’d rate this an 8/10 difficulty - while the core concepts weren’t overwhelmingly complex, getting all the boundary detection and counting logic correct was tough. The transition from Part 1 to Part 2 was particularly clever, forcing us to completely rethink how we counted boundaries while still using much of the same underlying region detection code.

See you tomorrow!


Day 11: Plutonian Pebbles

Today’s puzzle involved some physics-defying stones that transform based on simple rules whenever you blink. Each stone has a number, and depending on whether it’s 0, has an even number of digits, or neither, it will either become 1, split into two stones, or multiply by 2024.

For Part 1, we need to simulate 25 “blinks” and count how many stones we end up with. My first approach was pretty straightforward - just track each stone in a slice and apply the transformations:

func blink(stones []int) []int {
    newStones := make([]int, 0)
    for _, stone := range stones {
        if stone == 0 {
            newStones = append(newStones, 1)
        } else if countDigitsInNumber(stone)%2 == 0 {
            left, right := splitNumberIntoHalves(stone)
            newStones = append(newStones, left, right)
        } else {
            newStones = append(newStones, stone*2024)
        }
    }
    return newStones
}

This worked fine for Part 1, but when Part 2 asked for 75 iterations, the exponential growth of stones made this approach impractical. After some thinking, I realized I could optimize by tracking frequencies instead of individual stones. This led to recasting the problem in terms of stone counts:

func day11(input string, part int) int {
    s := utils.GetIntsInString(input)

    // count freq of each stone val
    scs := make(map[int]int)
    for _, s := range s {
        scs[s]++
    }

    iters := 25
    if part == 2 {
        iters = 75
    }

    // process stones
    for i := 0; i < iters; i++ {
        scs = blinkAllStones(scs)
    }

    return sumStones(scs)
}

The core transformation logic became simpler too, just handling one stone value at a time:

func blinkOnce(stone int) []int {
    if stone == 0 {
        return []int{1}
    }

    // convert to string to check digits
    s := strconv.Itoa(stone)
    if len(s)%2 == 0 {
        // split into two halves
        halfway := len(s) / 2
        l := utils.AtoiNoErr(s[:halfway])
        r := utils.AtoiNoErr(s[halfway:])
        return []int{l, r}
    }

    return []int{stone * 2024}
}

With the blinkAllStones function looking like this:

func blinkAllStones(stoneCount map[int]int) map[int]int {
    nc := make(map[int]int)

    for s, c := range stoneCount {
        // get new stones from transformation
        ns := blinkOnce(s)
        // add to counts, multiplied by how many of the original stones we had
        for _, n := range ns {
            nc[n] += c
        }
    }

    return nc
}
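An equivalent alternative I didn't use is memoized recursion on (stone value, blinks remaining), which avoids rebuilding the counts map each round. A sketch; the 55312 figure is the puzzle's own 25-blink example:

```go
package main

import (
	"fmt"
	"strconv"
)

type key struct{ stone, blinks int }

var memo = map[key]int{}

// count returns how many stones a single stone becomes after n blinks.
func count(stone, n int) int {
	if n == 0 {
		return 1
	}
	k := key{stone, n}
	if v, ok := memo[k]; ok {
		return v
	}
	var result int
	s := strconv.Itoa(stone)
	switch {
	case stone == 0:
		result = count(1, n-1)
	case len(s)%2 == 0:
		l, _ := strconv.Atoi(s[:len(s)/2])
		r, _ := strconv.Atoi(s[len(s)/2:])
		result = count(l, n-1) + count(r, n-1)
	default:
		result = count(stone*2024, n-1)
	}
	memo[k] = result
	return result
}

func main() {
	// The puzzle's example: stones 125 and 17 become 55312 stones after 25 blinks.
	fmt.Println(count(125, 25) + count(17, 25)) // 55312
}
```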

Thoughts

Today’s puzzle was all about finding the right way to represent the data. While the initial slice-based approach worked, the frequency map solution was much more elegant and efficient. It’s a good reminder that sometimes you need to step back and rethink your approach when scaling becomes an issue.

I’d rate this a 3/10 difficulty - the concept itself isn’t particularly challenging, but recognizing the need to optimize and implementing the frequency-based solution required some careful thinking.

See you tomorrow!


Day 10: Hoof It

In today’s Advent of Code puzzle, we were tasked with analyzing a topographic map of a hiking area on a floating island. The map indicated the height of each position using a scale from 0 (lowest) to 9 (highest). Our goal was to find all the trailheads and calculate two different metrics for each one.

For Part 1, we needed to find the trailheads (any position with a height of 0) and calculate a “score” for each one. The score was defined as the number of positions with a height of 9 that were reachable from that trailhead via a hiking trail. A hiking trail was any path that started at height 0, ended at height 9, and increased by exactly 1 in height at each step.

To solve this, I first parsed the input into a 2D grid representing the topographic map. I then implemented a breadth-first search (BFS) algorithm, starting from each trailhead and exploring all valid paths. Whenever I reached a height 9 position, I incremented the trailhead’s score. The sum of all the trailhead scores was the final answer for Part 1.

func bfs(grid [][]int, start [2]int) int {
    directions := [][2]int{{0, 1}, {1, 0}, {0, -1}, {-1, 0}}
    queue := [][2]int{start}
    visited := make(map[[2]int]bool)
    visited[start] = true
    score := 0

    for len(queue) > 0 {
        curr := queue[0]
        queue = queue[1:]

        for _, d := range directions {
            ni, nj := curr[0]+d[0], curr[1]+d[1]
            if ni >= 0 && ni < len(grid) && nj >= 0 && nj < len(grid[0]) {
                next := [2]int{ni, nj}
                if !visited[next] && grid[ni][nj] == grid[curr[0]][curr[1]]+1 {
                    visited[next] = true
                    queue = append(queue, next)
                    if grid[ni][nj] == 9 {
                        score++
                    }
                }
            }
        }
    }

    return score
}
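For completeness, the Part 1 driver just runs a search from every height-0 cell and sums the scores. Here's a compact DFS variant of the same idea (not my actual BFS code), deduping the reachable 9s with a set:

```go
package main

import "fmt"

// sumTrailheadScores runs a search from every height-0 cell and sums the scores.
func sumTrailheadScores(grid [][]int) int {
	total := 0
	for i := range grid {
		for j := range grid[i] {
			if grid[i][j] == 0 {
				total += score(grid, i, j, map[[2]int]bool{})
			}
		}
	}
	return total
}

// score is a DFS that counts each reachable 9 exactly once.
func score(grid [][]int, i, j int, nines map[[2]int]bool) int {
	if grid[i][j] == 9 {
		if nines[[2]int{i, j}] {
			return 0
		}
		nines[[2]int{i, j}] = true
		return 1
	}
	count := 0
	for _, d := range [][2]int{{0, 1}, {1, 0}, {0, -1}, {-1, 0}} {
		ni, nj := i+d[0], j+d[1]
		// heights strictly increase along a trail, so no cycle is possible
		if ni >= 0 && ni < len(grid) && nj >= 0 && nj < len(grid[0]) && grid[ni][nj] == grid[i][j]+1 {
			count += score(grid, ni, nj, nines)
		}
	}
	return count
}

func main() {
	// The puzzle's small example map: one trailhead with a score of 1.
	grid := [][]int{
		{0, 1, 2, 3},
		{1, 2, 3, 4},
		{8, 7, 6, 5},
		{9, 8, 7, 6},
	}
	fmt.Println(sumTrailheadScores(grid)) // 1
}
```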

Part 2 introduced a new metric called the “rating” of a trailhead. The rating was defined as the number of distinct hiking trails that began at that trailhead. To solve this, I used a recursive depth-first search (DFS) approach, tracking the positions visited along the current path and backtracking after each branch so that every distinct trail gets counted.

func countPaths(grid [][]int, pos [2]int, visited map[[2]int]bool) int {
    if grid[pos[0]][pos[1]] == 9 {
        return 1 // reached the end of a valid path
    }

    directions := [][2]int{{0, 1}, {1, 0}, {0, -1}, {-1, 0}}
    visited[pos] = true
    totalPaths := 0

    for _, d := range directions {
        ni, nj := pos[0]+d[0], pos[1]+d[1]
        next := [2]int{ni, nj}

        // check bounds and ensure path increment is valid
        if ni >= 0 && ni < len(grid) && nj >= 0 && nj < len(grid[0]) {
            if !visited[next] && grid[ni][nj] == grid[pos[0]][pos[1]]+1 {
                totalPaths += countPaths(grid, next, visited)
            }
        }
    }

    visited[pos] = false // backtrack
    return totalPaths
}

Thoughts

Today’s puzzle was an interesting challenge in graph traversal and state management. While the core concepts weren’t overly complex, implementing the solutions correctly and efficiently required some careful thinking.

The trickiest part was definitely the transition from Part 1 to Part 2. Part 1’s BFS approach was relatively straightforward, but Part 2’s DFS-based path counting introduced a lot more complexity. Keeping track of the visited positions and direction changes to avoid duplicates was key, and it took some debugging to get the recursive logic right.

Performance-wise, both solutions are reasonably efficient, with the BFS approach in Part 1 running in O(n^2) time and the DFS in Part 2 running in O(n · 3^n) time in the worst case (where n is the number of positions in the grid). However, the memory usage for Part 2 could be a concern for larger inputs, as we’re recursively exploring all possible paths.

Overall, I’d rate this puzzle a 4/10 in difficulty. The concepts weren’t overly complex, but the implementation details and the transition from Part 1 to Part 2 added a decent challenge. It was a good exercise in graph algorithms and state management, and I enjoyed the problem-solving aspect of it.

See you tomorrow!


Day 9: Disk Fragmenter

Today’s puzzle was about helping an amphipod compact files on a disk. The challenge involved interpreting a dense disk map format and implementing file movement algorithms. What made this particularly interesting was the need to track file IDs and carefully manage block-by-block movements in Part 1, then switch to whole-file movements in Part 2.

For Part 1, we needed to move individual blocks from the rightmost files to the leftmost free spaces, maintaining file order and calculating a checksum based on positions. Here’s how I approached parsing the disk map and creating the initial state:

type Disk struct {
    Blocks []int // -1 represents free space, non-negative numbers represent file IDs
}

func parseDiskMap(input string) ([]int, []int) {
    files := make([]int, 0)
    spaces := make([]int, 0)

    for i := 0; i < len(input); i++ {
        size := utils.AtoiNoErr(string(input[i]))
        if i%2 == 0 {
            files = append(files, size)
        } else {
            spaces = append(spaces, size)
        }
    }
    return files, spaces
}

func createInitialDisk(files, spaces []int) Disk {
    var blocks []int
    fileID := 0

    for i := 0; i < len(files); i++ {
        // add file blocks
        for j := 0; j < files[i]; j++ {
            blocks = append(blocks, fileID)
        }
        fileID++

        // add free space
        if i < len(spaces) {
            for j := 0; j < spaces[i]; j++ {
                blocks = append(blocks, -1)
            }
        }
    }

    return Disk{Blocks: blocks}
}

The core of Part 1 involved finding the rightmost file block and the leftmost free space, then moving blocks one at a time:

func (d *Disk) findFirstFreeSpace() int {
    for i, block := range d.Blocks {
        if block == -1 {
            return i
        }
    }
    return -1
}

func (d *Disk) findLastFileBlock() int {
    for i := len(d.Blocks) - 1; i >= 0; i-- {
        if d.Blocks[i] != -1 {
            return i
        }
    }
    return -1
}

func (d *Disk) moveOneBlock(fromIndex, toIndex int) {
    fileID := d.Blocks[fromIndex]
    d.Blocks[fromIndex] = -1
    d.Blocks[toIndex] = fileID
}

Part 2 changed things up significantly - instead of moving individual blocks, we needed to move entire files at once, but only if there was enough continuous free space to the left. Files had to be processed in decreasing order of file ID. This required new helper functions to find file sizes and continuous free space:

func (d *Disk) findFileSize(pos int) int {
    if pos < 0 || pos >= len(d.Blocks) || d.Blocks[pos] == -1 {
        return 0
    }

    fileID := d.Blocks[pos]
    start := pos
    // Find start of file
    for start >= 0 && d.Blocks[start] == fileID {
        start--
    }
    start++

    end := pos
    // Find end of file
    for end < len(d.Blocks) && d.Blocks[end] == fileID {
        end++
    }

    return end - start
}

func (d *Disk) findFreeSpaceSize(pos int) int {
    size := 0
    for i := pos; i < len(d.Blocks) && d.Blocks[i] == -1; i++ {
        size++
    }
    return size
}

One of the trickiest parts was managing the movement of whole files while ensuring we didn’t overwrite other files in the process:

func (d *Disk) moveWholeFile(fromIndex, toIndex, size int) {
    fileID := d.Blocks[fromIndex]

    // Clear old location
    for i := 0; i < size; i++ {
        d.Blocks[fromIndex+i] = -1
    }

    // Place at new location
    for i := 0; i < size; i++ {
        d.Blocks[toIndex+i] = fileID
    }
}

Thoughts

This puzzle was quite challenging, primarily because it required careful state management and precise implementation of different movement strategies for each part. The main challenges were:

  1. Understanding the dense disk map format and correctly translating it into a workable representation
  2. Implementing the block-by-block movement logic for Part 1 while maintaining file integrity
  3. Making the mental shift to whole-file movements in Part 2
  4. Managing edge cases like ensuring continuous free space and processing files in the correct order

The biggest trap was assuming Part 2 would be more complex than Part 1 - while it required different logic, moving whole files at once was actually simpler in some ways than the careful block-by-block movements of Part 1. The key insight was realizing that Part 2’s constraints actually made the problem more deterministic, since files could only move if there was enough continuous free space to their left.

I’d rate this a 7/10 difficulty. While the concepts weren’t extremely complex, getting all the details right and handling both parts correctly required careful thought and implementation. The transition from Part 1 to Part 2 was particularly interesting, requiring a complete rethink of the movement strategy while still maintaining the core disk state management.

See you tomorrow!


Day 8: Resonant Collinearity

Today’s puzzle was about antenna signals and finding special points called antinodes in a 2D grid. Each antenna has a specific frequency (represented by a letter or digit), and antinodes form when specific geometric conditions are met between antennas of the same frequency.

For Part 1, antinodes occur at any point that’s collinear (in a straight line) with two antennas of the same frequency, but only when that point is twice as far from one antenna as it is from the other. I approached this by first parsing the grid and grouping antennas by frequency:

type Grid [][]rune
type Position struct {
    x, y int
}

func getFrequencies(grid Grid) map[rune][]Position {
    frequencies := make(map[rune][]Position)
    for y := 0; y < len(grid); y++ {
        for x := 0; x < len(grid[y]); x++ {
            char := grid[y][x]
            if char != '.' { // ignores filler spaces
                frequencies[char] = append(frequencies[char], Position{x, y})
            }
        }
    }
    return frequencies
}

The core logic for finding antinodes involves checking if points are collinear and satisfy the distance ratio requirement. To avoid floating-point precision issues, I used squared distances:

func isAntinode(p, a, b Position) bool {
    if !isCollinear(p, a, b) {
        return false
    }

    // Calculate squared distances
    dap := distanceSquared(a, p)
    dbp := distanceSquared(b, p)

    // Check if either distance is twice the other
    // Note: We compare squares, so it's 4 times instead of 2 times
    return dap == 4*dbp || dbp == 4*dap
}
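distanceSquared isn't shown above; here's the whole Part 1 check as a self-contained sketch:

```go
package main

import "fmt"

type Position struct{ x, y int }

// isCollinear uses cross multiplication, so no division is needed.
func isCollinear(p, a, b Position) bool {
	return (a.x-p.x)*(b.y-p.y) == (b.x-p.x)*(a.y-p.y)
}

// distanceSquared avoids floats by never taking a square root.
func distanceSquared(a, b Position) int {
	dx, dy := a.x-b.x, a.y-b.y
	return dx*dx + dy*dy
}

// isAntinode: p is collinear with a and b, and twice as far from one as the
// other (comparing squared distances, so the factor becomes 4).
func isAntinode(p, a, b Position) bool {
	if !isCollinear(p, a, b) {
		return false
	}
	dap, dbp := distanceSquared(a, p), distanceSquared(b, p)
	return dap == 4*dbp || dbp == 4*dap
}

func main() {
	// Antennas at (4,4) and (5,5): (3,3) is twice as far from (5,5) as from (4,4).
	fmt.Println(isAntinode(Position{3, 3}, Position{4, 4}, Position{5, 5})) // true
}
```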

Part 2 introduced “resonant harmonics” which simplified the rules - an antinode now occurs at any point that’s collinear with two antennas of the same frequency, regardless of distance. This actually made the calculation simpler, but required checking every grid position:

if harmonic {
    // Part 2: Check every point in the grid for collinearity
    for y := 0; y < len(grid); y++ {
        for x := 0; x < len(grid[y]); x++ {
            p := Position{x, y}
            // Check if this point is collinear with any pair of antennas
            for i := 0; i < len(pos); i++ {
                for j := i + 1; j < len(pos); j++ {
                    if isCollinear(p, pos[i], pos[j]) {
                        antinodes[p] = true
                        break
                    }
                }
            }
        }
    }
}

Thoughts

Today’s puzzle was an interesting mix of geometry and efficient grid searching. The main challenges were:

  1. Getting the collinearity check right without running into floating-point precision issues
  2. Efficiently calculating distances and ratios for Part 1
  3. Managing the transition between the complex distance rules in Part 1 and the simpler but more computationally intensive Part 2
  4. Keeping track of unique antinode positions when multiple antenna pairs might create antinodes at the same point

A key insight was using cross multiplication in the collinearity check to avoid division:

func isCollinear(p, a, b Position) bool {
    dx1 := a.x - p.x
    dy1 := a.y - p.y
    dx2 := b.x - p.x
    dy2 := b.y - p.y
    return dx1*dy2 == dx2*dy1
}

I’d rate this a 6/10 difficulty - while the geometric concepts weren’t too complex, implementing them correctly and efficiently required careful thinking and good problem-solving skills. Part 2’s twist was clever, forcing us to rethink our approach even though it technically simplified the problem.

See you tomorrow!


Day 7: Bridge Repair

Today’s puzzle was about evaluating arithmetic expressions with a twist. We needed to determine if sequences of numbers could be combined using operators to produce specific target values. The catch? Operators are evaluated strictly left-to-right (no precedence rules), and the numbers must be used in their given order.

For Part 1, we had to work with addition (+) and multiplication (*) operators. Each line in the input contained a target value followed by a sequence of numbers. For example:


190: 10 19
3267: 81 40 27
292: 11 6 16 20

I approached this using backtracking to try all possible operator combinations. First, I created a map to store the parsed input:

func parseInput(input string) map[int][]int {
    calibrations := make(map[int][]int)
    for _, line := range strings.Split(input, "\n") {
        parts := strings.Split(line, ": ")
        key, _ := strconv.Atoi(parts[0])

        for _, val := range strings.Split(parts[1], " ") {
            val, _ := strconv.Atoi(val)
            calibrations[key] = append(calibrations[key], val)
        }
    }

    return calibrations
}

The core validation logic uses backtracking to try each possible operator at each position:

var backtrack func(pos, curr int) bool
backtrack = func(pos, curr int) bool {
    // if we've used all the numbers, check if we hit the target
    if pos == len(vals) {
        return curr == target
    }

    // try addition
    if backtrack(pos+1, curr+vals[pos]) {
        return true
    }

    // try multiplication
    if backtrack(pos+1, curr*vals[pos]) {
        return true
    }

    return false
}

Part 2 introduced a new operator: concatenation (||). This operator joins the digits of two numbers together (e.g., 12 || 345 = 12345). This required adding string manipulation to our solution:

if tryConcat {
    // convert the current to a string, concat the next, convert back
    currStr := strconv.Itoa(curr)
    nextStr := strconv.Itoa(vals[pos])
    concatNum, _ := strconv.Atoi(currStr + nextStr)
    if backtrack(pos+1, concatNum) {
        return true
    }
}
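An alternative to the string round-trip (not what I used) is doing the concatenation arithmetically, shifting the left operand by the right operand's digit count:

```go
package main

import "fmt"

// concat joins the digits of a and b numerically: concat(12, 345) == 12345.
func concat(a, b int) int {
	if b == 0 {
		return a * 10 // zero still contributes one digit
	}
	shift := 1
	for n := b; n > 0; n /= 10 {
		shift *= 10
	}
	return a*shift + b
}

func main() {
	fmt.Println(concat(12, 345)) // 12345
}
```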

The main function just calls the parseInput function then loops over each calibration and calls backtrack with the target value:

calibrations := parseInput(input)
validKeys := make([]int, 0)

for key, vals := range calibrations {
    if isCalibrationValid(key, vals, part == 2) {
        validKeys = append(validKeys, key)
    }
}

return utils.Sum(validKeys)

Thoughts

Today’s puzzle was a nice mix of arithmetic and string manipulation. The main challenge was handling the left-to-right evaluation requirement and implementing the backtracking solution efficiently. The addition of the concatenation operator in Part 2 added an interesting twist, requiring careful string manipulation and handling of larger numbers.

The key insight was recognizing this as a backtracking problem where we try different operators at each position. While we could potentially optimize further with caching or pruning, the current solution provides a good balance between clarity and performance.

I’d rate this a 2/10 difficulty - not overly complex, but requires some careful implementation. I saw some other people were having issues with their sum being higher than what a 32-bit integer can hold; luckily I didn’t have that issue, since Go’s int is 64 bits on most platforms.

See you tomorrow!

Day 6: Guard Gallivant

Today’s puzzle involved simulating a guard’s patrol path in a grid-based map and predicting their movements. The input consists of a grid where ^ represents the guard’s starting position (facing up) and # represents obstacles.

For Part 1, we needed to calculate how many distinct positions the guard would visit before leaving the mapped area. The guard follows a simple protocol:

  • If there’s an obstacle in front, turn right 90 degrees
  • Otherwise, move forward one step

My solution uses a Point and Direction system to track the guard’s movement through the grid:

type Point struct {
    x, y int
}

type Direction struct {
    dx, dy int
}

type Guard struct {
    pos Point
    dir Direction
}

Then we simulate the guard’s movement using a map to track visited positions and implement the protocol:

func simulateGuardPath(grid [][]byte, guard Guard, checkLoop bool) int {
    // (the loop-detection branch used when checkLoop is true is elided here)
    visited := make(map[Point]bool)
    visited[guard.pos] = true

    for {
        next := Point{
            x: guard.pos.x + guard.dir.dx,
            y: guard.pos.y + guard.dir.dy,
        }

        if !isInBounds(next, grid) {
            break
        }

        if grid[next.y][next.x] == '#' {
            guard.turnRight()
            continue
        }

        guard.pos = next
        visited[guard.pos] = true
    }

    return len(visited)
}

Part 2 flipped the problem around - we needed to find positions where placing a new obstacle would cause the guard to get stuck in a loop. This required modifying our simulation to detect repeated states (position + direction combinations):

type State struct {
    pos Point
    dir Direction
}

func simLoop(grid [][]byte, guard Guard) int {
    start := guard.pos
    count := 0

    for y := range grid {
        for x := range grid[y] {
            if grid[y][x] != '.' || (Point{x, y} == start) {
                continue
            }

            // create a copy of the grid with the new obstruction
            newGrid := make([][]byte, len(grid))
            for i := range grid {
                newGrid[i] = make([]byte, len(grid[i]))
                copy(newGrid[i], grid[i])
            }
            newGrid[y][x] = '#'

            if simulateGuardPath(newGrid, guard, true) > 0 {
                count++
            }
        }
    }
    return count
}

Thoughts

Today’s puzzle was a deceptively complex challenge in path simulation and state management. While the initial rules seemed straightforward, implementing them correctly and especially handling Part 2 required significant problem-solving and careful debugging.

The trickiest aspects included:

  1. Getting the turn mechanics exactly right - the subtle interplay between direction changes and movement needed precise implementation
  2. Understanding that Part 2 wasn’t just about finding places to put obstacles, but about finding positions that would create perfect loops
  3. Realizing that detecting loops required tracking the complete state (position AND direction) because the same position could be visited multiple times in different directions as part of a valid patrol route
  4. Managing the complexity of copying grids and running simulations for every possible obstacle position in Part 2 without missing edge cases

A key insight that took some time to reach was that the loop detection needed to look at the full state of the guard - just tracking visited positions wasn’t enough. This realization led to the State struct shown above, which captures both position and direction.

Performance-wise, Part 2 involves simulating the guard’s path for every possible obstacle position, which means we’re doing a lot of grid copying and path simulation. While there might be cleverer ways to optimize this by analyzing patterns in the guard’s movement, sometimes the straightforward approach, even if computationally intensive, is the most reliable path to a solution.

I’d rate this a 6/10 difficulty - while the core concepts aren’t overly complex, getting all the pieces working together correctly required solid problem-solving skills and careful debugging. The leap from Part 1 to Part 2 was particularly challenging, requiring a fundamental shift in how we thought about the problem and tracked the guard’s state.

See you tomorrow!


Day 5: Print Queue

Today’s puzzle involved managing a print queue for safety manual updates using topological sorting. The input consisted of rules about page ordering (in the form X|Y meaning page X must be printed before page Y) and sequences of pages that needed to be printed.

Part 1 required checking if given sequences of pages satisfied all the applicable ordering rules. For example, if we had rules like 47|53 and 75|29, we needed to verify that page 47 came before 53 and page 75 came before 29 in each sequence. My solution uses a directed graph to represent these dependencies:

func buildGraph(rules []Rule) Graph {
    graph := make(Graph)
    for _, rule := range rules {
        graph[rule.before] = append(graph[rule.before], rule.after)
    }
    return graph
}

Then, for each sequence, I check whether it satisfies all applicable rules by tracking page positions and verifying dependencies:

func isValidSequence(seq []int, graph Graph) bool {
    positions := make(map[int]int)
    for i, page := range seq {
        positions[page] = i
    }

    for i := 0; i < len(seq); i++ {
        page := seq[i]
        if after, exists := graph[page]; exists {
            for _, mustBeAfter := range after {
                if pos, exists := positions[mustBeAfter]; exists {
                    if pos <= i {
                        return false
                    }
                }
            }
        }
    }
    return true
}

Part 2 flipped the problem around - instead of just validating sequences, we needed to correctly order the invalid sequences using Kahn’s topological sorting algorithm. This required tracking in-degrees for each node and carefully managing the order of processing to match the expected output:

func topologicalSort(pages []int, fullGraph Graph) []int {
    // Create subgraph with only relevant pages...
    nodes := make(map[int]*Node)
    // Build dependencies...
    // Process queue in correct order...
    // Return sorted sequence...
}

Topological sort is a long algorithm in Go, so I didn’t write it all here, but the full solution is available on my GitHub.

Thoughts

Today’s puzzle was a nice application of graph theory and topological sorting. While the core concept wasn’t too complex, getting all the edge cases right and ensuring the correct ordering in Part 2 required careful implementation.

The trickiest part was handling Part 2’s requirement to generate valid orderings. Making sure the topological sort produced the correct ordering when multiple valid orderings were possible took some debugging to get right. The key insight was realizing we needed to carefully manage the queue order to ensure deterministic output.

The problem was also a good reminder of how important it is to properly model the problem space - representing the dependencies as a directed graph made both parts much more straightforward to solve.

I’d rate this a 5/10 difficulty - not overly complex conceptually, but requiring solid understanding of graph algorithms and attention to detail in the implementation.

See you tomorrow!


Day 4: Ceres Search

Today’s puzzle was an interesting twist on the classic word search problem. Instead of searching for a single instance of “XMAS”, we needed to find all possible occurrences in Part 1, and then pivot to finding X-shaped “MAS” patterns in Part 2.

There’s quite a bit of code for today, so I’m not going to include it all here. As always, the full solution is available on my GitHub.

For Part 1, I implemented a grid search that looks for “XMAS” in all eight possible directions (horizontal, vertical, and diagonal) from any starting position. The approach uses direction vectors to check each possible orientation:

var dirs = []Direction{
    {0, 1},   // right
    {0, -1},  // left
    {1, 0},   // down
    {-1, 0},  // up
    {1, 1},   // down-right
    {1, -1},  // down-left
    {-1, 1},  // up-right
    {-1, -1}, // up-left
}
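The scan loop itself isn’t shown in the excerpt above, but a sketch of the idea (the function name and grid type here are my own, not the repo’s) looks like this: from every cell, try to read “XMAS” along each direction vector, counting matches.

```go
package main

import "fmt"

type Direction struct{ dy, dx int }

// the same eight direction vectors as above
var dirs = []Direction{
    {0, 1}, {0, -1}, {1, 0}, {-1, 0},
    {1, 1}, {1, -1}, {-1, 1}, {-1, -1},
}

// countXMAS counts every occurrence of "XMAS" starting at any cell,
// in any of the eight directions, staying inside the grid.
func countXMAS(grid []string) int {
    const word = "XMAS"
    rows, cols := len(grid), len(grid[0])
    count := 0
    for r := 0; r < rows; r++ {
        for c := 0; c < cols; c++ {
            for _, d := range dirs {
                ok := true
                for i := 0; i < len(word); i++ {
                    nr, nc := r+d.dy*i, c+d.dx*i
                    if nr < 0 || nr >= rows || nc < 0 || nc >= cols || grid[nr][nc] != word[i] {
                        ok = false
                        break
                    }
                }
                if ok {
                    count++
                }
            }
        }
    }
    return count
}

func main() {
    // one forward match in row 0, one reversed match in row 2
    fmt.Println(countXMAS([]string{"XMAS", "....", "SAMX"})) // 2
}
```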

Part 2 required a different approach since we needed to find X-shaped patterns where each diagonal spells either “MAS” or “SAM”. I optimized this by only checking positions that contain ‘A’ (the center of the X) and then validating the diagonals:

if grid[row][col] == 'A' {
    if checkXMASCross(grid, row, col, rows, cols) {
        count++
    }
}
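checkXMASCross isn’t shown above, but a hedged sketch of what it needs to verify (the real helper in my repo may differ): each of the two diagonals through the ‘A’ must read “MAS” or “SAM”.

```go
package main

import "fmt"

// checkXMASCross reports whether the 'A' at (row, col) is the center of
// an X whose two diagonals each spell "MAS" or "SAM".
func checkXMASCross(grid [][]byte, row, col, rows, cols int) bool {
    if row < 1 || col < 1 || row >= rows-1 || col >= cols-1 {
        return false // the 'A' needs room for all four corners
    }
    isMS := func(a, b byte) bool {
        return (a == 'M' && b == 'S') || (a == 'S' && b == 'M')
    }
    // down-right diagonal uses corners (row-1,col-1) and (row+1,col+1);
    // down-left diagonal uses corners (row-1,col+1) and (row+1,col-1)
    return isMS(grid[row-1][col-1], grid[row+1][col+1]) &&
        isMS(grid[row-1][col+1], grid[row+1][col-1])
}

func main() {
    grid := [][]byte{
        []byte("M.S"),
        []byte(".A."),
        []byte("M.S"),
    }
    fmt.Println(checkXMASCross(grid, 1, 1, 3, 3)) // true
}
```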

Thoughts

Today’s puzzle was quite the step up from the last one (in my opinion). While grid traversal and pattern matching might sound straightforward, implementing both parts correctly required careful thought and precise implementation.

The part that really made this puzzle tricky was handling all the edge cases correctly. In part 1, ensuring the pattern checking worked in all eight directions while staying within bounds took some debugging to get right. The pattern validation logic needed to be both efficient and thorough since we were looking for all possible instances of “XMAS”.

Part 2’s X-pattern search was even more devious - it fundamentally changed how we needed to think about the problem. The realization that we could optimize by starting from ‘A’ positions helped, but validating the diagonal patterns correctly and handling the fact that “MAS” could be reversed added extra complexity. Plus, the way the puzzle description led us to initially think about it in terms of overlapping “MAS” strings, when really it was about finding X-shaped patterns, was quite clever.

Performance-wise, while both parts run quickly thanks to the optimized starting positions (X/A), getting there required several refactoring attempts to handle all cases correctly while maintaining efficiency.

I’d rate this a 5/10 difficulty - definitely more challenging than the first three days and requiring some solid problem-solving skills to implement correctly. The misdirection in Part 2 and the need for precise pattern matching logic made this a satisfyingly complex puzzle.

See you tomorrow!


Day 3: Mull it Over

Today’s puzzle was about parsing corrupted computer memory and extracting valid multiplication instructions. The input consists of strings containing mul(x,y) patterns mixed with various invalid characters and conditionals. Essentially it’s a whole bunch of regex.

For part 1, you must find all the valid mul instructions (where x and y are 1-3 digit integers) and sum up their products. For example:

xmul(2,4)mul[3,7]a2mul(4,3)

Would be (2*4) + (4*3) = 20.

Only the properly formatted mul(x,y) patterns count. My solution uses regex to extract these patterns:

func getMulNums(input string) [][]int {
    muls := make([][]int, 0)
    re := regexp.MustCompile(`mul\((\d+),(\d+)\)`)
    matches := re.FindAllStringSubmatchIndex(input, -1)

    // go on to process each match ...
}

You then just need to sum up the products of each pair.

Part 2 introduces conditional logic with do() and don't() instructions that enable or disable multiplication operations. I extended the regex to also capture these instructions, added a conditional flag to getMulNums, and used an allowMul boolean that do() and don't() toggle whenever that flag is set:

func getMulNums(input string, conditional bool) [][]int {
    // ...
    re := regexp.MustCompile(`mul\(\d+,\d+\)|do\(\)|don't\(\)`) // new regex
    // ...

    allowMul := true

    for _, match := range matches {
        op := input[match[0]:match[1]]

        // this part only happens if we are doing part 2
        if conditional {
            if op == "do()" {
                allowMul = true
            } else if op == "don't()" {
                allowMul = false
            }
        }

        // slightly modified from before due to the new conditional
        if allowMul && strings.HasPrefix(op, "mul") {
            muls = append(muls, extractNumsFromMul(op))
        }
    }

    return muls
}

Aside from these updates, I also made an extractNumsFromMul function to make my code a bit easier to read:

func extractNumsFromMul(op string) []int {
    numRe := regexp.MustCompile(`(\d+)`)
    nums := numRe.FindAllStringSubmatch(op, -1)
    x, _ := strconv.Atoi(nums[0][1])
    y, _ := strconv.Atoi(nums[1][1])
    return []int{x, y}
}

As always, the full solution is available on my GitHub.

Thoughts

Today was an interesting exercise in string parsing and regex. While the core logic wasn’t too complex, getting the regex patterns right and handling the conditional state properly required some careful thinking.

The biggest challenge was probably crafting the regex pattern to capture both the multiplication instructions and the conditional statements while ignoring all the noise characters.

I’d rate this a 2/10 difficulty - not particularly difficult, just some clever regex.

See you tomorrow!


Day 2: Red-Nosed Reports

Not too bad today, to be honest. I got both parts done in under 45 minutes, which, including an algorithm rewrite, is not too bad for me.

Today’s problem was about processing an m×n matrix of integers. Each row is a list of “levels”. For each row, you need to verify it passes a few rules:

  1. The row can only increase OR decrease, no switching part way through.
  2. The difference between two consecutive levels must be between 1 and 3 (inclusive).

For part 1, you need to sum up the number of passing rows (rows that pass the above rules). For example,

7 6 4 2 1
1 2 7 8 9
9 7 6 2 1
1 3 2 4 5
8 6 4 4 1
1 3 6 7 9

Has 2 safe rows (rows 1 and 6). Part 1 is pretty simple, once you have a working algorithm. Mine looks like this:

func isValidRow(row []int) bool {
    if len(row) < 2 {
        return true
    }

    // determine the initial direction from the first 2 nums
    isInc := row[1] > row[0]

    // check first pair meets difference criteria
    initialDiff := utils.Abs(row[1] - row[0])
    if initialDiff < 1 || initialDiff > 3 {
        return false
    }

    // check remaining pairs
    for i := 1; i < len(row)-1; i++ {
        curr, next := row[i], row[i+1]
        diff := next - curr

        // if direction changes, sequence is invalid
        if (diff > 0) != isInc {
            return false
        }

        // check the diff is within bounds
        if utils.Abs(diff) < 1 || utils.Abs(diff) > 3 {
            return false
        }
    }
    return true
}

Note: utils.Abs is a helper function that takes an int, unlike math.Abs, which takes a float64.
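For the curious, such a helper is only a few lines. This is a plausible version; the one in my utils package may differ slightly:

```go
package main

import "fmt"

// Abs returns the absolute value of an int, avoiding the
// float64 round-trip that math.Abs would require.
func Abs(n int) int {
    if n < 0 {
        return -n
    }
    return n
}

func main() {
    fmt.Println(Abs(-3), Abs(3), Abs(0)) // 3 3 0
}
```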

Go through each row in the matrix, if it is valid, increment the counter. Easy peasy.

For part 2, if a row is invalid, you need to now check if you can make it valid by removing any 1 level from the row. For example, the above matrix now has 4 safe rows (rows 1, 4, 5, and 6). I had trouble figuring out how to do this efficiently, so it took some time, but I eventually came up with this:

func canBecomeValid(row []int) bool {
    // try removing each num and check if the resulting row is valid
    for i := range row {
        // create a new slice without the current num
        newRow := make([]int, 0, len(row)-1)
        newRow = append(newRow, row[:i]...)
        newRow = append(newRow, row[i+1:]...)

        if isValidRow(newRow) {
            return true
        }
    }
    return false
}

This checks whether a given row can become valid by removing any one level: it tries every removal and returns true as soon as one yields a valid row. Then modify the main function a bit for part 2 to count a row if either isValidRow or canBecomeValid returns true. Done!

Thoughts

Today was a pretty cool problem. It seems trivial at first but then there are a few things that catch you off guard. Still, relatively easy though, it will get harder the further into the month we go.

This algorithm would be much easier in Python using zip, and even day 1 would be easier with the same function. I may write a Zip2 function in Go as a util in the future, since this is the second day in a row it would have made my life easier. That said, this algorithm works and is still pretty fast. Both parts ran in < 600 µs on my server. The point of AoC (for me) is not to write the most efficient algorithm possible or to get on the leaderboard, but to learn and improve. From doing AoC for the past few years, I’ve learned a lot about algorithms and data structures, and in my book, that’s a win.

Overall, I’d give this a 3/10 difficulty, simply because it was a bit of work to find a semi-efficient solution for part 2.

See you tomorrow!


Day 1: Historian Hysteria

First day of AoC 2024! This year I decided to do all my solutions in Go.

Today’s problem was pretty easy. You’re given a list of integers, 2 in each row (separated by 3 spaces).

For part 1, you need to pair the smallest number on the left with the smallest on the right, then the second smallest, then the third, and so on. For example,

1   4
3   1
4   3
8   2

Would be paired as (1, 1), (3, 2), (4, 3), (8, 4). Then, you need to sum up the differences between each pair. With the above example, the sum would be 0 + 1 + 1 + 4 = 6.

My approach for part 1 was to first separate the numbers into two lists left and right, sort them, and iterate over the left list calculating the absolute difference between each pair. Since we know the right list is the same length as the left, we can just iterate over one and associate the index with the other.

left := make([]int, 0)
right := make([]int, 0)

// parse input ...

sort.Ints(left)
sort.Ints(right)

for i := 0; i < len(left); i++ {
    out += int(math.Abs(float64(left[i] - right[i])))
}

Pretty easy!

For part 2, you need to multiply each number in the left list by the number of times it appears in the right list and sum up the results. For the above example, the result would be (1 * 1) + (3 * 1) + (4 * 1) + (8 * 0) = 8, since 8 never appears in the right list.

My approach in part 2 was to make a hashmap of the right list associating each number with its frequency, then iterate over the left list and multiply each number by its frequency in the hashmap.

freq := make(map[int]int)
for _, num := range right {
    freq[num]++
}

for _, num := range left {
    if f, exists := freq[num]; exists {
        out += num * f
    }
}

For this part, you could use a nested for loop, but I wanted it to be a bit faster. The nested for loop is O(n^2), whereas the hashmap is O(n).

Thoughts

Super easy today. Looking back on last year’s day 1 problem, this was a walk in the park. I had fun optimizing my solution, and yes, I know it’s not the most efficient algorithm, but it’s easy to understand and it works.

Overall, today was pretty easy, about a 1/10 difficulty.

See you tomorrow!