mr-kaffee
Peter Wieland
GitHub: mr-kaffee
Strava: Peter Wieland
I am a curious amateur coder (lots of Java, MATLAB, recently Rust). I like cycling on and off roads, and much more …
Maybe I’ll try out some Ruby this year. At least a hello world is done ;)
The actual solution resides in solution.rb
It will define classes, methods, and the like to calculate the solution, e.g.,
def say_hello(name = "World")
"Hello, #{name}!"
end
And a section to run the actual solution and measure time, like
if __FILE__ == $0
b = Benchmark.measure { puts say_hello }
puts "Solved in #{b.real}s"
end
The if __FILE__ == $0 part is taken from Ruby in Twenty Minutes
The solution is executed with ruby solution.rb
To test my code, there will be a solution_test.rb
file, which defines a test class derived from Test::Unit::TestCase:
class TestSolution < Test::Unit::TestCase
def test_say_hello
assert_equal "Hello, World!", say_hello
assert_equal "Hello, Eric Wastl!", say_hello("Eric Wastl")
end
end
Tests are executed with ruby solution_test.rb
It’ll be another day of Rust solutions for the 2021 edition of Advent of Code
Generally, my solutions will contain a src/main.rs
file which reads the input from input.txt, calls the solution functions, and measures time.
The actual solution will be implemented in the src/lib.rs file.
The lib.rs
file also contains the tests in a separate submodule. I will use this for test-driven
development, e.g., based on the examples given in the puzzles.
Run solution with cargo run --release
Run tests with cargo test
or cargo test --release
I wanted to keep the overall solution time as small as possible, and after two thirds of the puzzles were done, I set my personal target to 500ms. The biggest challenge was to reduce the runtime of day 23. After several iterations, I finally came down to slightly above 200ms. With this, some improvements on day 19, and some build optimizations, my overall solution time is now just below 500ms on my machine:
Total time for 25 days: 487.5843ms (19.503372ms per day)
My second challenge was to use only out-of-the-box Rust features. This one I achieved.
My day00
solution this year will run all my solutions. The code looks as follows:
const DAYS: &[fn()] = &[
day00::solve,
day01::solve,
day02::solve,
day03::solve,
day04::solve,
day05::solve,
day06::solve,
day07::solve,
day08::solve,
day09::solve,
day10::solve,
day11::solve,
day12::solve,
day13::solve,
day14::solve,
day15::solve,
day16::solve,
day17::solve,
day18::solve,
day19::solve,
day20::solve,
day21::solve,
day22::solve,
day23::solve,
day24::solve,
day25::solve,
];
pub fn solve(indices: impl Iterator<Item = usize>) {
let timer = Instant::now();
let mut count = 0;
for idx in indices {
DAYS[idx]();
count += 1;
}
let elapsed = timer.elapsed();
println!(
"Total time for {} days: {:?} ({:?} per day)",
count,
elapsed,
elapsed.checked_div(count).unwrap()
);
}
To run all solutions, use cargo run --release -- --from 1 --to 25
. The command line options --from <day>
and --to <day>
specify the range of days to run (bounds included). The default is to run all days from 1 to 25 (inclusive). If a day is not yet implemented, the program panics.
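A minimal sketch of how these options could be parsed with nothing but the standard library (the option names come from the text above; the concrete parsing logic and the helper name parse_range are my assumptions, not necessarily the original code):

```rust
/// parse `--from <day>` / `--to <day>` from an argument list,
/// defaulting to the full range of days 1 to 25 (hypothetical helper)
fn parse_range(args: &[String]) -> (usize, usize) {
    let (mut from, mut to) = (1usize, 25usize);
    let mut k = 0;
    while k + 1 < args.len() {
        match args[k].as_str() {
            "--from" => from = args[k + 1].parse().expect("Could not parse --from"),
            "--to" => to = args[k + 1].parse().expect("Could not parse --to"),
            _ => {}
        }
        k += 1;
    }
    (from, to)
}
```

With day n stored at index n of the DAYS array, the call would then be something like solve(from..=to).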
Rust solution to AoC|2021|01.
Parse the input
/// parse each line into a number
pub fn parse(content: &str) -> Vec<usize> {
content
.lines()
.map(|line| line.parse().expect("Could not parse line"))
.collect()
}
And count how often consecutive inputs increase (test is part of documentation)
/// count how often consecutive numbers increase
///
/// # Examples
/// ```
/// let data: Vec<usize> = vec![199, 200, 208, 210, 200, 207, 240, 269, 260, 263];
/// assert_eq!(7, mr_kaffee_2021_01::count_increase(&data));
/// ```
pub fn count_increase(data: &[usize]) -> usize {
data.iter()
.zip(data[1..].iter())
.filter(|(a, b)| b > a)
.count()
}
Create sliding sums over three consecutive numbers (again with test as part of documentation)
/// calculate sliding sums over three elements
///
/// # Examples
///
/// ```
/// let data: Vec<usize> = vec![199, 200, 208, 210, 200, 207, 240, 269, 260, 263];
/// let sums: Vec<usize> = vec![607, 618, 618, 617, 647, 716, 769, 792];
/// assert_eq!(sums, mr_kaffee_2021_01::sliding_sums(&data));
/// ```
pub fn sliding_sums(data: &[usize]) -> Vec<usize> {
data.iter()
.zip(data[1..].iter())
.zip(data[2..].iter())
.map(|((a, b), c)| a + b + c)
.collect()
}
and use function count_increase
from part 1 again
The zip
function on iterators is useful.
Later, looking at other solutions, I felt stupid: obviously a[k] + a[k + 1] + a[k + 2] < a[k + 1] + a[k + 2] + a[k + 3] if and only if a[k] < a[k + 3], and thus there is no need at all to calculate the sliding sums for part 2.
This is a generic solution with offset = 1 for part 1 and offset = 3 for part 2:
/// count how often numbers with distance `offset` increase
///
/// # Examples
/// ```
/// let data: Vec<usize> = vec![199, 200, 208, 210, 200, 207, 240, 269, 260, 263];
/// assert_eq!(7, mr_kaffee_2021_01::count_increase_with_offset(&data, 1));
/// assert_eq!(5, mr_kaffee_2021_01::count_increase_with_offset(&data, 3));
/// ```
pub fn count_increase_with_offset(data: &[usize], offset: usize) -> usize {
data.iter()
.zip(data[offset..].iter())
.filter(|(a, b)| b > a)
.count()
}
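For comparison, the same count can be expressed with slice::windows instead of two zipped iterators; a sketch (count_increase_windows is my own name, not from the original code):

```rust
/// count how often numbers with distance `offset` increase,
/// formulated with `windows` instead of `zip`
fn count_increase_windows(data: &[usize], offset: usize) -> usize {
    data.windows(offset + 1)          // all consecutive slices of length offset + 1
        .filter(|w| w[offset] > w[0]) // compare first and last element of each window
        .count()
}
```

For the example data this yields 7 with offset = 1 and 5 with offset = 3.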
Ruby solution to AoC|2021|02.
Generic function to calculate the position. It takes the input string, an initial value, and a hash of lambdas updating the position. The function iterates through the lines of the input. Each line is split into the command part cmd and the value part v (single space as separator). The command is used to select a lambda from the steps hash, which is then called with the accumulator acc and the value v to update the accumulator.
def calc_position(input, init, steps)
input.split("\n")
.map { |line| line.split(' ') }
.inject(init) { |acc, cmd| steps[cmd.first].call acc, cmd.last.to_i }
end
For part 1, this function is called as follows:
def calc_position_1(input)
calc_position(input, [0, 0], {
'up' => ->(acc, v) { [acc[0], acc[1] - v] },
'down' => ->(acc, v) { [acc[0], acc[1] + v] },
'forward' => ->(acc, v) { [acc[0] + v, acc[1]] }
})
end
For part 2, the aim has to be considered as additional value and the update rules are changed:
def calc_position_2(input)
calc_position(input, [0, 0, 0], {
'up' => ->(acc, v) { [acc[0], acc[1], acc[2] - v] },
'down' => ->(acc, v) { [acc[0], acc[1], acc[2] + v] },
'forward' => ->(acc, v) { [acc[0] + v, acc[1] + acc[2] * v, acc[2]] }
})
end
class TestSolution < Test::Unit::TestCase
CONTENT = %(forward 5
down 5
forward 8
up 3
down 8
forward 2).freeze
def test_calc_position_1
assert_equal [15, 10], calc_position_1(CONTENT)
end
def test_calc_position_2
assert_equal [15, 60, 10], calc_position_2(CONTENT)
end
end
Rust solution to AoC|2021|02.
The most difficult part today was to parse the input. My Rust knowledge obviously is slightly rusty…
Eventually I created an enum with a parse
function as follows:
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
pub enum Command {
Up(isize),
Down(isize),
Forward(isize),
}
impl Command {
/// parse command
///
/// # Examples
/// ```
/// # use mr_kaffee_2021_02::*;
/// assert_eq!(Command::Up(5), Command::parse("up 5"));
/// assert_eq!(Command::Down(6), Command::parse("down 6"));
/// assert_eq!(Command::Forward(-2), Command::parse("forward -2"));
/// ```
///
/// ```should_panic
/// # use mr_kaffee_2021_02::*;
/// // this will panic since the first part is not a valid command
/// let cmd = Command::parse("invalid-command 7");
/// ```
///
/// ```should_panic
/// # use mr_kaffee_2021_02::*;
/// // this will panic since the two parts are not separated by a single blank
/// let cmd = Command::parse("up\t7");
/// ```
///
/// ```should_panic
/// # use mr_kaffee_2021_02::*;
/// // this will panic since the second part is not a number
/// let cmd = Command::parse("down not-a-number");
/// ```
pub fn parse(line: &str) -> Self {
let (cmd, v) = line.split_once(' ').expect("Invalid line");
let v = v.parse().expect("Could not parse value");
match cmd {
"up" => Command::Up(v),
"down" => Command::Down(v),
"forward" => Command::Forward(v),
_ => panic!("Unexpected command"),
}
}
}
The solution then is done with a simple fold operation on an iterator accumulating depth (y) and horizontal position (x)
/// Calculate positions
///
/// # Example
/// ```
/// # use mr_kaffee_2021_02::*;
/// assert_eq!((5, 0), calc_position("forward 5"));
/// assert_eq!((0, 5), calc_position("down 5"));
/// assert_eq!((3, -5), calc_position("up 5\nforward 3"));
/// assert_eq!((0, 0), calc_position(""));
/// ```
pub fn calc_position(input: &str) -> (isize, isize) {
input
.lines()
.map(|line| Command::parse(line))
.fold((0, 0), |(x, y), cmd| match cmd {
Command::Up(v) => (x, y - v),
Command::Down(v) => (x, y + v),
Command::Forward(v) => (x + v, y),
})
}
The second part was an easy extension to the first part. Just add a third accumulator state for the aim (small trap: my horizontal position is called 'x' which is not the same as the 'X' in the puzzle description)
/// Calculate positions with aim
///
/// # Example
/// ```
/// # use mr_kaffee_2021_02::*;
/// assert_eq!((5, 0, 0), calc_position_with_aim("forward 5"));
/// assert_eq!((5, 0, 5), calc_position_with_aim("forward 5\ndown 5"));
/// assert_eq!((13, 40, 5), calc_position_with_aim("forward 5\ndown 5\nforward 8"));
/// ```
pub fn calc_position_with_aim(input: &str) -> (isize, isize, isize) {
input
.lines()
.map(|line| Command::parse(line))
.fold((0, 0, 0), |(x, y, aim), cmd| match cmd {
Command::Up(v) => (x, y, aim - v),
Command::Down(v) => (x, y, aim + v),
Command::Forward(v) => (x + v, y + aim * v, aim),
})
}
Parsing simple inputs does not require regular expressions, and creating enums with parse functions leads to a bit more code but looks much prettier.
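A natural follow-up (my own sketch, not what the original code does) is to implement the standard FromStr trait, so that commands can be parsed with str::parse and invalid input surfaces as a Result instead of a panic:

```rust
use std::str::FromStr;

#[derive(Debug, PartialEq, Eq)]
enum Command {
    Up(isize),
    Down(isize),
    Forward(isize),
}

impl FromStr for Command {
    type Err = String;

    fn from_str(line: &str) -> Result<Self, Self::Err> {
        // split at the single blank, then parse the value
        let (cmd, v) = line.split_once(' ').ok_or_else(|| "invalid line".to_string())?;
        let v = v.parse().map_err(|_| format!("invalid value '{}'", v))?;
        match cmd {
            "up" => Ok(Command::Up(v)),
            "down" => Ok(Command::Down(v)),
            "forward" => Ok(Command::Forward(v)),
            _ => Err(format!("unexpected command '{}'", cmd)),
        }
    }
}
```

With this in place, parsing a line becomes line.parse::<Command>().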
Rust solution to AoC|2021|03.
As always, the first challenge is to parse the input. I decided to read the input into integers, which caused some headaches later on to get the bit ordering correct. After I realized that the line length in the example differs from the line length in the puzzle input, I also return the length from my parse function.
/// parse input into integers
/// return bit-length of values in addition
pub fn parse(input: &str) -> (Vec<usize>, usize) {
let len = input.lines().next().expect("No lines").len();
let values = input
.lines()
.map(|line| usize::from_str_radix(line, 2).expect("Could not parse line"))
.collect();
(values, len)
}
Part 1 is then solved with the help of a function that counts, for every position, how often the bit is set in the input. Another function then calculates gamma by setting all the bits in the positions where most of the input values have the bit set, and epsilon as the inverse:
/// Count how often bits are set in the values.
///
/// returns vector of counts with first element indicating how often least significant
/// bit is set and last element indicating how often most significant bit is set
pub fn count_all_ones(values: &[usize], len: usize) -> Vec<usize> {
values.iter().fold(vec![0; len], |counts, value| {
counts
.iter()
.enumerate()
.map(|(k, one)| *one + ((*value >> k) & 1))
.collect()
})
}
/// calculate the gamma and epsilon values
///
/// gamma is the value of all the most common bits
/// epsilon is the value of all the least common bits
///
/// returns ``(gamma, epsilon, gamma * epsilon)``
pub fn calc_gamma_epsilon(values: &[usize], len: usize) -> (usize, usize, usize) {
let counts = count_all_ones(values, len);
let gamma = (0..len).fold(0, |gamma, bit| {
gamma + (((2 * counts[bit] >= values.len()) as usize) << bit)
});
let epsilon = !gamma & ((1 << len) - 1);
(gamma, epsilon, gamma * epsilon)
}
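The complement trick in calc_gamma_epsilon masks the bitwise NOT down to len bits. In isolation (a tiny sketch of the same expression):

```rust
/// epsilon is the bitwise complement of gamma, restricted to `len` bits
fn epsilon_from_gamma(gamma: usize, len: usize) -> usize {
    // without the mask, `!gamma` would also set all bits above position len - 1
    !gamma & ((1 << len) - 1)
}
```

For the example's gamma = 0b10110 = 22 and len = 5 this yields 0b01001 = 9.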
As the puzzle states, part 2 is the trickier one.
I wrote a filter function to reduce the list of values step by step until only one is left. Whether oxygen or CO2 ratings are calculated is controlled with an invert flag. The filter function uses a new count_ones function that only counts the specific bit we are interested in:
/// Count how often specific bit is set in the values.
pub fn count_ones(values: &[usize], bit: usize) -> usize {
values
.iter()
.fold(0, |count, value| count + ((value >> bit) & 1))
}
/// filter out all the values whose given bit is not equal to the most common
/// bit in that position.
///
/// If invert is true, do the opposite and keep all the values whose given bit
/// is not equal to the most common bit in that position.
pub fn filter(values: Vec<usize>, bit: usize, invert: bool) -> Vec<usize> {
// determine expected value (xor with invert)
let exp = ((2 * count_ones(&values, bit) >= values.len()) ^ invert) as usize;
values
.into_iter()
.filter(|value| (*value >> bit) & 1 == exp)
.collect()
}
/// calc oxygen (``invert = false``) or co2 (``invert = true``) ratings
pub fn calc_rating(values: &[usize], len: usize, invert: bool) -> usize {
let mut values = values.to_vec();
let mut bit = Some(len - 1);
while values.len() > 1 && bit.is_some() {
values = filter(values, bit.unwrap(), invert);
bit = match bit {
Some(v) if v > 0 => Some(v - 1),
_ => None,
};
}
*values.first().expect("No value left")
}
/// calc oxygen and co2 ratings
///
/// returns ``(oxygen, co2, oxygen * co2)``
pub fn calc_ratings(vals: &[usize], len: usize) -> (usize, usize, usize) {
let oxygen = calc_rating(vals, len, false);
let co2 = calc_rating(vals, len, true);
(oxygen, co2, oxygen * co2)
}
#[cfg(test)]
mod tests {
use super::*;
const CONTENT: &str = "00100
11110
10110
10111
10101
01111
00111
11100
10000
11001
00010
01010";
const EXP_VALUES: &'static [usize] = &[
0b00100, 0b11110, 0b10110, 0b10111, 0b10101, 0b01111, 0b00111, 0b11100, 0b10000, 0b11001,
0b00010, 0b01010,
];
const EXP_LEN: usize = 5;
#[test]
fn test_parse() {
let (vals, len) = parse(CONTENT);
assert_eq!(EXP_LEN, len);
assert_eq!(EXP_VALUES, vals);
}
#[test]
fn test_count_all_ones() {
assert_eq!(vec![5, 7, 8, 5, 7], count_all_ones(EXP_VALUES, EXP_LEN));
}
#[test]
fn test_calc_gamma_epsilon() {
assert_eq!((22, 9, 22 * 9), calc_gamma_epsilon(EXP_VALUES, EXP_LEN));
}
#[test]
fn test_filter() {
let values = filter(EXP_VALUES.to_vec(), EXP_LEN - 1, false);
assert_eq!(
vec![0b11110, 0b10110, 0b10111, 0b10101, 0b11100, 0b10000, 0b11001],
values
);
let values = filter(EXP_VALUES.to_vec(), EXP_LEN - 1, true);
assert_eq!(vec![0b00100, 0b01111, 0b00111, 0b00010, 0b01010], values);
}
#[test]
fn test_calc_rating() {
let (values, len) = parse(CONTENT);
assert_eq!(23, calc_rating(&values, len, false));
assert_eq!(10, calc_rating(&values, len, true));
}
#[test]
fn test_calc_ratings() {
let (values, len) = parse(CONTENT);
assert_eq!((23, 10, 230), calc_ratings(&values, len));
}
}
Rust solution to AoC|2021|04.
So let’s play Bingo (I don’t remember when I last did…)
After I solved part 1 today, there was very little to add for part 2. Hence, there is only one solution for both parts.
I created a struct
for a bingo board with a function new
to create new boards, apply_draws
to apply draws until a row or a column is complete, and a function get_score
to calculate the final score:
/// structure for a bingo board with ``data`` containing numbers, ``marked`` containing
/// flags which numbers have been drawn, ``count`` holding the count of draws
/// (including draws that did not match), ``last`` holding the last number drawn, and
/// ``bingo`` is a flag indicating whether the board has at least one row or column completely
/// marked
pub struct Board {
data: Vec<usize>,
marked: Vec<bool>,
count: usize,
last: Option<usize>,
bingo: bool,
}
impl Board {
/// dimension of the board
pub const N: usize = 5;
/// create new board from data vector
pub fn new(data: Vec<usize>) -> Self {
if data.len() != Self::N * Self::N {
panic!("Illegal data length: {}", data.len());
}
Board {
data,
marked: vec![false; Self::N * Self::N],
count: 0,
last: None,
bingo: false,
}
}
/// apply draws to board, stops applying draws if a row or column is entirely marked
pub fn apply_draws(&mut self, draws: &[usize]) {
if self.bingo {
return;
}
for draw in draws {
self.count += 1;
self.last = Some(*draw);
if let Some((idx, _)) = self.data.iter().enumerate().find(|(_, v)| *v == draw) {
self.marked[idx] = true;
let x0 = idx % Self::N;
let y0 = idx / Self::N;
if (0..5).all(|x| self.marked[x + Self::N * y0])
|| (0..5).all(|y| self.marked[x0 + Self::N * y])
{
self.bingo = true;
return;
}
}
}
}
/// get score (sum of numbers not marked multiplied by last number drawn)
pub fn get_score(&self) -> usize {
if !self.bingo {
panic!("Board not solved.");
}
self.data
.iter()
.zip(self.marked.iter())
.filter(|(_, marked)| !*marked)
.map(|(v, _)| v)
.sum::<usize>()
* self.last.expect("No draws")
}
}
Then, I created a function which parses the input to a vector of boards and a vector of numbers drawn
pub fn parse(input: &str) -> (Vec<Board>, Vec<usize>) {
let mut lines = input.lines();
// parse drawn numbers
let draws = lines
.next()
.expect("No draws")
.split(',')
.map(|part| part.parse().expect("Could not parse number"))
.collect();
// parse boards
let mut boards = Vec::new();
let mut data = Vec::with_capacity(Board::N * Board::N);
loop {
let (eof, line) = lines.next().map(|line| (false, line)).unwrap_or((true, ""));
if line.trim().len() == 0 && data.len() > 0 {
// block complete
boards.push(Board::new(data));
data = Vec::with_capacity(Board::N * Board::N);
} else {
// add data to block
line.split_whitespace()
.map(|v| v.parse().expect("Could not parse number"))
.for_each(|v| data.push(v));
}
if eof {
// no more lines
return (boards, draws);
}
}
}
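As an aside, the block parsing could be shortened by splitting the input on blank lines first; a sketch (my own variant, assuming blocks are separated by exactly one blank line, which the puzzle input satisfies):

```rust
/// parse blank-line separated blocks of whitespace separated numbers,
/// skipping the leading draws line
fn parse_blocks(input: &str) -> Vec<Vec<usize>> {
    input
        .split("\n\n")
        .skip(1) // the first block is the comma separated draws line
        .map(|block| {
            block
                .split_whitespace()
                .map(|v| v.parse().expect("Could not parse number"))
                .collect()
        })
        .collect()
}
```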
Now we are ready to play bingo. I play all the boards one by one, then look for the one which wins first and the one which wins last, and return their scores:
/// play bingo
///
/// returns scores of winning and losing board
pub fn play<'a>(boards: &'a mut [Board], draws: &[usize]) -> (usize, usize) {
    // board that wins first and board that wins last, by count of applied draws
    let mut winner: Option<&Board> = None;
    let mut loser: Option<&Board> = None;
    for board in boards {
        board.apply_draws(&draws);
        winner = winner
            .filter(|winner| board.count >= winner.count)
            .or(Some(board));
        loser = loser
            .filter(|loser| board.count <= loser.count)
            .or(Some(board));
    }
    // unwrap results
    (
        winner.expect("No winner").get_score(),
        loser.expect("No loser").get_score(),
    )
}
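The winner and loser bookkeeping above relies on combining Option::filter with Option::or to keep a running extremum: the current best is kept only while the new candidate does not beat it. The same pattern in isolation, as a sketch:

```rust
/// running minimum via `Option::filter` + `Option::or`: the current best is
/// kept only while the new candidate is not strictly smaller
fn min_count(counts: &[usize]) -> Option<usize> {
    let mut best: Option<usize> = None;
    for &c in counts {
        best = best.filter(|&b| c >= b).or(Some(c));
    }
    best
}
```

In play, the board that wins first is the one with the minimum draw count, and the board that wins last the one with the maximum.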
I kind of did TDD ;)
#[cfg(test)]
mod tests {
use super::*;
const CONTENT: &str = "7,4,9,5,11,17,23,2,0,14,21,24,10,16,13,6,15,25,12,22,18,20,8,19,3,26,1
22 13 17 11 0
8 2 23 4 24
21 9 14 16 7
6 10 3 18 5
1 12 20 15 19
3 15 0 2 22
9 18 13 17 5
19 8 7 25 23
20 11 10 24 4
14 21 16 12 6
14 21 17 24 4
10 16 15 9 19
18 8 23 26 20
22 11 13 6 5
2 0 12 3 7";
const EXP_DRAWS: &'static [usize] = &[
7, 4, 9, 5, 11, 17, 23, 2, 0, 14, 21, 24, 10, 16, 13, 6, 15, 25, 12, 22, 18, 20, 8, 19, 3,
26, 1,
];
const DATA_0: &'static [usize] = &[
22, 13, 17, 11, 0, 8, 2, 23, 4, 24, 21, 9, 14, 16, 7, 6, 10, 3, 18, 5, 1, 12, 20, 15, 19,
];
#[test]
fn test_board_new() {
let board = Board::new(DATA_0.to_owned());
assert_eq!(DATA_0, board.data);
assert_eq!(vec![false; Board::N * Board::N], board.marked);
assert_eq!(false, board.bingo);
assert_eq!(0, board.count);
assert_eq!(None, board.last);
}
#[test]
fn test_parse() {
let (boards, draws) = parse(CONTENT);
assert_eq!(EXP_DRAWS, draws);
assert_eq!(3, boards.len());
assert_eq!(DATA_0, boards[0].data);
}
#[test]
#[should_panic]
fn test_get_score_panics() {
let (boards, _) = parse(CONTENT);
// this fails since no bingo yet
boards[0].get_score();
}
#[test]
fn test_board_apply_draws() {
let (mut boards, draws) = parse(CONTENT);
// 15th number (13) wins for board at index 1
boards[1].apply_draws(&draws);
assert_eq!(15, boards[1].count);
assert_eq!(Some(13), boards[1].last);
assert!(boards[1].bingo);
// 12th number (24) wins for board at index 2
boards[2].apply_draws(&draws);
assert_eq!(12, boards[2].count);
assert_eq!(Some(24), boards[2].last);
assert!(boards[2].bingo);
}
#[test]
fn test_play() {
let (mut boards, draws) = parse(CONTENT);
assert_eq!((4512, 1924), play(&mut boards, &draws));
}
}
Rust solution to AoC|2021|05.
I played around a little bit with how best to represent lines. Eventually, I decided to simply represent them as tuples (x1, y1, x2, y2)
/// parse lines ``"x1,y1 -> x2,y2"`` to tuples ``(x1, y1, x2, y2)``
pub fn parse(content: &str) -> Vec<(isize, isize, isize, isize)> {
content
.lines()
.map(|line| {
let mut parts = line.split(" -> ");
let mut xy1 = parts.next().expect("No first point").split(',');
let x1 = xy1
.next()
.expect("No x1")
.parse()
.expect("Could not parse x1");
let y1 = xy1
.next()
.expect("No y1")
.parse()
.expect("Could not parse y1");
let mut xy2 = parts.next().expect("No second point").split(',');
let x2 = xy2
.next()
.expect("No x2")
.parse()
.expect("Could not parse x2");
let y2 = xy2
.next()
.expect("No y2")
.parse()
.expect("Could not parse y2");
(x1, y1, x2, y2)
})
.collect()
}
I initially decided to count vents in a flat list representing the relevant part of the ocean’s ground. To identify that relevant part, I have a function to determine the bounding box of all lines.
/// get bounding box over lines
///
/// returns ``(x_min, y_min, x_max, y_max)`` so that all points ``(x, y)`` on
/// any line satisfy ``x_min <= x < x_max`` and ``y_min <= y < y_max``
pub fn get_bbox(lines: &[(isize, isize, isize, isize)]) -> (isize, isize, isize, isize) {
lines.iter().fold(
(isize::MAX, isize::MAX, isize::MIN, isize::MIN),
|(x_min, y_min, x_max, y_max), (x1, y1, x2, y2)| {
(
cmp::min(cmp::min(x_min, *x1), *x2),
cmp::min(cmp::min(y_min, *y1), *y2),
cmp::max(cmp::max(x_max, *x1 + 1), *x2 + 1),
cmp::max(cmp::max(y_max, *y1 + 1), *y2 + 1),
)
},
)
}
Later on, I realized that a lot of other solutions used hash maps to keep the counters. So I wanted to try that as well and see the runtime impact. I made my solution generic using a trait VentsCount
and implementing it for HashMap
directly and for vectors using a struct VecVentsCount
with a bit of extra information on the bounding box.
/// to allow using the same code with counts stored in a vector or in a hash map, the interface is modeled as a trait
pub trait VentsCount {
/// increment count at given coordinate
fn increment(&mut self, coord: (isize, isize));
/// count number of coordinates with count > 1
fn count_dangerous(&self) -> usize;
}
impl VentsCount for HashMap<(isize, isize), usize> {
fn increment(&mut self, coord: (isize, isize)) {
// get or insert entry and increment
*self.entry(coord).or_insert(0) += 1;
}
fn count_dangerous(&self) -> usize {
self.values().filter(|&count| *count > 1).count()
}
}
/// structure to count vents based on a vector
pub struct VecVentsCount {
counts: Vec<usize>,
width: isize,
x_min: isize,
y_min: isize,
}
impl VecVentsCount {
/// create new ``VecVentsCount`` for bounding box
pub fn new((x_min, y_min, x_max, y_max): (isize, isize, isize, isize)) -> Self {
let width = x_max - x_min;
let counts = vec![0usize; (width * (y_max - y_min)) as usize];
VecVentsCount {
counts,
width,
x_min,
y_min,
}
}
}
impl VentsCount for VecVentsCount {
fn increment(&mut self, (x, y): (isize, isize)) {
self.counts[(x - self.x_min + self.width * (y - self.y_min)) as usize] += 1;
}
fn count_dangerous(&self) -> usize {
self.counts.iter().filter(|count| **count > 1).count()
}
}
The solution is then built by iterating over all lines and, for each line, iterating over all points it contains, incrementing a counter for the coordinate of each point. Then count all coordinates which belong to more than one line (counter value greater than 1).
To iterate over the points, I calculate deltas dx
and dy
that define the difference in the x and y coordinate from one point to the next. They can be one of 0, 1, or -1.
/// this function uses the struct [VecVentsCount] by default. This behavior can be changed to using ``HashMap`` based
/// counting using the feature ``hash_counters``
pub fn count_overlaps(lines: &[(isize, isize, isize, isize)], incl_diagonal: bool) -> usize {
let mut counts: Box<dyn VentsCount> = if cfg!(feature = "hash_counters") {
Box::new(HashMap::new())
} else {
Box::new(VecVentsCount::new(get_bbox(lines)))
};
for (x1, y1, x2, y2) in lines {
if incl_diagonal || x1 == x2 || y1 == y2 {
let dx = match x1.cmp(&x2) {
Ordering::Equal => 0,
Ordering::Greater => -1,
Ordering::Less => 1,
};
let dy = match y1.cmp(&y2) {
Ordering::Equal => 0,
Ordering::Greater => -1,
Ordering::Less => 1,
};
let len = cmp::max((x2 - x1) * dx, (y2 - y1) * dy) + 1;
for k in 0..len {
counts.increment((x1 + k * dx, y1 + k * dy));
}
}
}
counts.count_dangerous()
}
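The two match expressions compute the sign of the coordinate differences; isize::signum does the same directly. A sketched variant of the per-line point iteration (my own helper, not part of the original code):

```rust
/// the integer points of a horizontal, vertical, or 45 degree line,
/// stepping with the sign of the coordinate differences
fn points((x1, y1, x2, y2): (isize, isize, isize, isize)) -> Vec<(isize, isize)> {
    let (dx, dy) = ((x2 - x1).signum(), (y2 - y1).signum());
    // number of integer points on the line (bounds included)
    let len = ((x2 - x1) * dx).max((y2 - y1) * dy) + 1;
    (0..len).map(|k| (x1 + k * dx, y1 + k * dy)).collect()
}
```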
Whether or not to include diagonal lines is controlled with the incl_diagonal flag, which is set to false for part 1 and true for part 2.
#[cfg(test)]
mod tests {
use super::*;
const CONTENT: &str = "0,9 -> 5,9
8,0 -> 0,8
9,4 -> 3,4
2,2 -> 2,1
7,0 -> 7,4
6,4 -> 2,0
0,9 -> 2,9
3,4 -> 1,4
0,0 -> 8,8
5,5 -> 8,2";
const LINES: &'static [(isize, isize, isize, isize)] = &[
(0, 9, 5, 9),
(8, 0, 0, 8),
(9, 4, 3, 4),
(2, 2, 2, 1),
(7, 0, 7, 4),
(6, 4, 2, 0),
(0, 9, 2, 9),
(3, 4, 1, 4),
(0, 0, 8, 8),
(5, 5, 8, 2),
];
#[test]
fn test_parse() {
assert_eq!(LINES, parse(CONTENT));
}
#[test]
fn test_get_bbox() {
assert_eq!((0, 0, 10, 10), get_bbox(LINES));
}
#[test]
fn test_count_overlaps_straight() {
assert_eq!(5, count_overlaps(LINES, false));
}
#[test]
fn test_count_overlaps() {
assert_eq!(12, count_overlaps(LINES, true));
}
}
A range is empty if the upper bound is less than the lower bound.
It is not easy to write functions in Rust that return iterators, since the actual type of the iterator depends on how it is constructed. Implementing an iterator on your own is not always worth the effort.
Sometimes, it is a good idea to use signed types even if all the results are positive. This allows to deal with negative increments much more directly.
Hash map based counting may come with a runtime penalty. In today’s puzzle, vector based counting performs three to four times faster.
Rust solution to AoC|2021|06.
The key is to not keep a list of timer values for each fish but a list of fish counts for each timer value.
Parsing the input is done as follows
/// parse fishes
///
/// returns a vector of the number of fishes per timer value
///
/// ```
/// # use mr_kaffee_2021_06::*;
/// assert_eq!(vec![0, 1, 1, 2, 1, 0, 0, 0, 0], parse("3,4,3,1,2"));
/// ```
pub fn parse(content: &str) -> Vec<u64> {
content
.trim()
.split(',')
.map(|age| age.parse::<usize>().expect("Could not parse fish"))
.fold(vec![0; 9], |mut fishes, age| {
fishes[age] += 1;
fishes
})
}
Simulations are done with:
/// simulate a given number of days
///
/// the timer values of all fishes are decreased by one. If the timer value is 0,
/// it is reset to 6 and the same number of fishes with timer value 8 is added.
pub fn simulate(mut fishes: Vec<u64>, days: usize) -> Vec<u64> {
for _ in 0..days {
let new_fishes = fishes[0];
for k in 1..fishes.len() {
fishes[k - 1] = fishes[k];
}
fishes[6] += new_fishes;
fishes[8] = new_fishes;
}
fishes
}
/// simulate given number of rounds and return sum
pub fn simulate_and_count(mut fishes: Vec<u64>, days: usize) -> u64 {
fishes = simulate(fishes, days);
fishes.iter().sum()
}
#[cfg(test)]
mod tests {
use super::*;
const DATA: &'static [u64] = &[0, 1, 1, 2, 1, 0, 0, 0, 0];
#[test]
fn test_simulate() {
let mut fishes = DATA.to_vec();
fishes = simulate(fishes, 1);
assert_eq!(fishes, parse("2,3,2,0,1"));
fishes = simulate(fishes, 1);
assert_eq!(fishes, parse("1,2,1,6,0,8"));
fishes = simulate(fishes, 1);
assert_eq!(fishes, parse("0,1,0,5,6,7,8"));
}
#[test]
fn test_simulate_and_count() {
assert_eq!(5934, simulate_and_count(DATA.to_vec(), 80));
}
}
I was thinking about creating an explicit solution (in memory of my time at the University of Stuttgart). Fish growth is governed by the discrete-time linear system
timers[k + 1] = A timers[k]
count[k] = C timers[k]

with the matrices

    | 0 1 0 0 0 0 0 0 0 |
    | 0 0 1 0 0 0 0 0 0 |
    | 0 0 0 1 0 0 0 0 0 |
    | 0 0 0 0 1 0 0 0 0 |
A = | 0 0 0 0 0 1 0 0 0 |
    | 0 0 0 0 0 0 1 0 0 |
    | 1 0 0 0 0 0 0 1 0 |
    | 0 0 0 0 0 0 0 0 1 |
    | 1 0 0 0 0 0 0 0 0 |

C = ( 1 1 1 1 1 1 1 1 1 )
The characteristic equation of A is
z^9 - z^2 - 1 = 0
which Maxima is not able to solve :( — so no explicit solution that I would be able to come up with
For a big number of rounds, it might be interesting to calculate powers of the above matrix using exponentiation by squaring, but for 256 rounds, a direct approach is certainly more efficient.
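To sketch what exponentiation by squaring would look like here (my own illustration; the original post does not implement it, and all helper names are made up):

```rust
// 9x9 lanternfish update matrix and its powers by squaring
type Mat = [[u64; 9]; 9];

fn mat_mul(a: &Mat, b: &Mat) -> Mat {
    let mut c = [[0u64; 9]; 9];
    for i in 0..9 {
        for j in 0..9 {
            for k in 0..9 {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    c
}

/// compute a^n by squaring: O(log n) matrix multiplications
fn mat_pow(mut a: Mat, mut n: usize) -> Mat {
    let mut r = [[0u64; 9]; 9];
    for i in 0..9 {
        r[i][i] = 1; // start from the identity matrix
    }
    while n > 0 {
        if n & 1 == 1 {
            r = mat_mul(&r, &a);
        }
        a = mat_mul(&a, &a);
        n >>= 1;
    }
    r
}

/// the update matrix A: timers shift down by one, timer 0 feeds
/// both the resets (timer 6) and the newly spawned fishes (timer 8)
fn lanternfish_matrix() -> Mat {
    let mut a = [[0u64; 9]; 9];
    for i in 0..8 {
        a[i][i + 1] = 1;
    }
    a[6][0] = 1;
    a[8][0] = 1;
    a
}

/// total fish count after `days` days: C * A^days * timers
fn count_after(timers: &[u64; 9], days: usize) -> u64 {
    let a = mat_pow(lanternfish_matrix(), days);
    (0..9)
        .map(|i| (0..9).map(|j| a[i][j] * timers[j]).sum::<u64>())
        .sum()
}
```

For the example timer counts [0, 1, 1, 2, 1, 0, 0, 0, 0] (input 3,4,3,1,2), this reproduces the known totals of 26 fish after 18 days, 5934 after 80 days, and 26984457539 after 256 days.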
I might not have a direct solution for an arbitrary number of rounds, but a direct solution for 80 and 256 rounds can still be given as

count[80] = ( 1421 1401 1191 1154 1034 950 905 779 768 ) * timers[0]
count[256] = ( 6703087164 6206821033 5617089148 5217223242 4726100874 4368232009 3989468462 3649885552 3369186778 ) * timers[0]
const CA_80: &'static [u64] = &[1421, 1401, 1191, 1154, 1034, 950, 905, 779, 768];
const CA_256: &'static [u64] = &[
6703087164, 6206821033, 5617089148, 5217223242, 4726100874, 4368232009, 3989468462, 3649885552,
3369186778,
];
pub fn solve_direct(fishes: &[u64], days: usize) -> u64 {
fishes
.iter()
.zip(
match days {
80 => CA_80,
256 => CA_256,
_ => panic!("No direct solution for {} days", days),
}
.iter(),
)
.map(|(counter, factor)| counter * factor)
.sum()
}
#[cfg(test)]
mod direct_solution_test {
use super::*;
const INPUT: &str = "4,1,1,4,1,2,1,4,1,3,4,4,1,5,5,1,3,1,1,1,4,4,3,1,5,3,1,2,5,1,1,5,\
1,1,4,1,1,1,1,2,1,5,3,4,4,1,1,1,1,1,1,1,1,1,2,1,1,1,1,1,5,1,1,1,4,1,2,3,5,1,2,2,4,1,4,4,4,\
1,2,5,1,2,1,1,1,1,1,1,4,1,1,4,3,4,2,1,3,1,1,1,3,5,5,4,3,4,1,5,1,1,1,2,2,1,3,1,2,4,1,1,3,3,\
1,3,3,1,1,3,1,5,1,1,3,1,1,1,5,4,1,1,1,1,4,1,1,3,5,4,3,1,1,5,4,1,1,2,5,4,2,1,4,1,1,1,1,3,1,\
1,1,1,4,1,1,1,1,2,4,1,1,1,1,3,1,1,5,1,1,1,1,1,1,4,2,1,3,1,1,1,2,4,2,3,1,4,1,2,1,4,2,1,4,4,\
1,5,1,1,4,4,1,2,2,1,1,1,1,1,1,1,1,1,1,1,4,5,4,1,3,1,3,1,1,1,5,3,5,5,2,2,1,4,1,4,2,1,4,1,2,\
1,1,2,1,1,5,4,2,1,1,1,2,4,1,1,1,1,2,1,1,5,1,1,2,2,5,1,1,1,1,1,2,4,2,3,1,2,1,5,4,5,1,4";
#[test]
#[should_panic]
fn test_direct_fails() {
solve_direct(&parse(INPUT), 56);
}
#[test]
fn test_direct_80() {
let fishes = parse(INPUT);
let sol_direct = solve_direct(&fishes, 80);
let sol_classic = simulate_and_count(fishes, 80);
assert_eq!(sol_classic, sol_direct);
}
#[test]
fn test_direct_256() {
let fishes = parse(INPUT);
let sol_direct = solve_direct(&fishes, 256);
let sol_classic = simulate_and_count(fishes, 256);
assert_eq!(sol_classic, sol_direct);
}
}
Rust solution to AoC|2021|07.
Parse input into a vector of numbers
pub fn parse(content: &str) -> Vec<usize> {
content
.trim()
.split(',')
.map(|v| v.parse().expect("Could not parse"))
.collect()
}
Create a function that calculates the fuel need for a given alignment position (after refactoring for part 2, this contains a cost function — mapping distance to cost — as argument; for part 1, this is just the identity)
pub fn get_fuel(crabs: &[usize], position: usize, cost: fn(usize) -> usize) -> usize {
crabs
.iter()
.map(|crab| {
cost(if crab > &position {
crab - position
} else {
position - crab
})
})
.sum()
}
I did not even think about anything more clever than linear search to solve the optimization problem.
/// linear search optimization
pub fn get_optimal_positions_fuel(crabs: &[usize], cost: fn(usize) -> usize) -> usize {
let pos_min = *crabs.iter().min().expect("No min");
let pos_max = *crabs.iter().max().expect("No max");
(pos_min..=pos_max)
.map(|position| get_fuel(crabs, position, cost))
.min()
.expect("No min")
}
For part two I just use a different cost function: COST_LINEAR
solves part 1, COST_INCREASING
solves part 2:
/// cost function for part 1: identity
pub const COST_LINEAR: fn(usize) -> usize = |distance| distance;
/// cost function for part 2: Gauss sum formula for 1 + ... + distance
pub const COST_INCREASING: fn(usize) -> usize = |distance| distance * (distance + 1) / 2;
I learned about something more clever than linear search from a colleague:
Part 1 is the solution to the optimization problem min_p sum_i |p - pos[i]|. Since the cost function to minimize is V-shaped, the optimization is solved by the median.
Part 2 is the solution to the optimization problem min_p sum_i |p - pos[i]| (|p - pos[i]| + 1) / 2. This is the same as min_p (sum_i (p - pos[i])^2 / 2 + sum_i |p - pos[i]| / 2).
The solution to min_p sum_i (p - pos[i])^2 / 2 is given by the mean of the pos[i] values, so I start from the mean and look in both directions to potentially find something better.
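This can be checked numerically on the example data. Below is a minimal standalone sketch (the `fuel` helper is a hypothetical stand-in for `get_fuel`, not taken from the crate): the median attains the part 1 minimum, and probing the two integers around the mean finds the part 2 minimum.

```rust
/// total fuel to align all crabs at `pos` under a given cost function
/// (hypothetical stand-in for `get_fuel`)
fn fuel(crabs: &[usize], pos: usize, cost: fn(usize) -> usize) -> usize {
    crabs.iter().map(|&c| cost(c.abs_diff(pos))).sum()
}

fn main() {
    let crabs = [16usize, 1, 2, 0, 4, 2, 7, 1, 2, 14];
    let linear: fn(usize) -> usize = |d| d;
    let increasing: fn(usize) -> usize = |d| d * (d + 1) / 2;

    // part 1: the median attains the minimum found by linear search
    let mut sorted = crabs.to_vec();
    sorted.sort_unstable();
    let median = sorted[sorted.len() / 2];
    let best_1 = (0..=16).map(|p| fuel(&crabs, p, linear)).min().unwrap();
    assert_eq!(best_1, fuel(&crabs, median, linear));

    // part 2: the optimum lies next to the mean (4.9 for this data)
    let mean = crabs.iter().sum::<usize>() / crabs.len(); // floor of the mean
    let best_2 = (0..=16).map(|p| fuel(&crabs, p, increasing)).min().unwrap();
    let near_mean = (mean..=mean + 1)
        .map(|p| fuel(&crabs, p, increasing))
        .min()
        .unwrap();
    assert_eq!(best_2, near_mean);
    println!("part 1: {best_1}, part 2: {best_2}"); // 37 and 168
}
```

For this data the mean is 4.9 and the part 2 optimum is at position 5, so checking floor(mean) and floor(mean) + 1 already finds it.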
You can run the direct solution using cargo run --release --features direct_solution. Quite a bit faster than linear search ;)
/// solution for part 1
///
/// default: linear search, see [get_optimal_positions_fuel] and [COST_LINEAR]
///
/// if feature ``direct_solution`` is selected, calculate median as solution, see [select]
pub fn solution_1(crabs: &[usize]) -> usize {
if cfg!(feature = "direct_solution") {
// calculate median and compute cost for median
get_fuel(crabs, select(&crabs, crabs.len() / 2), COST_LINEAR)
} else {
// linear search
get_optimal_positions_fuel(crabs, COST_LINEAR)
}
}
/// solution for part 2
///
/// default: linear search, see [get_optimal_positions_fuel] and [COST_INCREASING]
///
/// if feature ``direct_solution`` is selected, calculate mean and search for solution in neighborhood
pub fn solution_2(crabs: &[usize]) -> usize {
if cfg!(feature = "direct_solution") {
// calculate mean and cost for mean
let mean = (crabs.iter().sum::<usize>() + crabs.len() - 1) / crabs.len();
let mut cost = get_fuel(crabs, mean, COST_INCREASING);
// search for improvements for increasing positions
let mut delta = 1;
loop {
let cost_upd = get_fuel(crabs, mean + delta, COST_INCREASING);
if cost_upd >= cost {
break;
}
cost = cost_upd;
delta += 1;
}
// if no improvement found, search for improvements for decreasing positions
if delta == 1 {
loop {
let cost_upd = get_fuel(crabs, mean - delta, COST_INCREASING);
if cost_upd >= cost {
break;
}
cost = cost_upd;
delta += 1;
}
}
cost
} else {
// linear search
get_optimal_positions_fuel(crabs, COST_INCREASING)
}
}
The median is calculated recursively with a quickselect:
/// Recursive calculation of k-th element.
///
/// # Examples
///
/// ```
/// # use mr_kaffee_2021_07::*;
/// let data = vec![1,4,7,12,8,13,101];
/// assert_eq!(8, select(&data, data.len() / 2));
/// ```
pub fn select(data: &[usize], k: usize) -> usize {
if k >= data.len() {
panic!(
"Cannot select rank {} of list of size {}",
k,
data.len()
);
}
if data.len() == 1 {
return data[0];
}
let pivot = data[0];
let lows = data
.iter()
.copied()
.filter(|&d| d < pivot)
.collect::<Vec<_>>();
let pivots = data
.iter()
.copied()
.filter(|&d| d == pivot)
.collect::<Vec<_>>();
let highs = data
.iter()
.copied()
.filter(|&d| d > pivot)
.collect::<Vec<_>>();
if k < lows.len() {
select(&lows, k)
} else if k < lows.len() + pivots.len() {
pivots[0]
} else {
select(&highs, k - lows.len() - pivots.len())
}
}
#[cfg(test)]
mod tests {
use super::*;
const CONTENT: &str = "16,1,2,0,4,2,7,1,2,14";
const DATA: &'static [usize] = &[16, 1, 2, 0, 4, 2, 7, 1, 2, 14];
#[test]
fn test_parse() {
assert_eq!(DATA, parse(CONTENT));
}
#[test]
fn test_get_fuel() {
assert_eq!(37, get_fuel(DATA, 2, COST_LINEAR));
assert_eq!(41, get_fuel(DATA, 1, COST_LINEAR));
assert_eq!(39, get_fuel(DATA, 3, COST_LINEAR));
}
#[test]
fn test_solution_1() {
assert_eq!(37, solution_1(DATA));
}
#[test]
fn test_solution_2() {
assert_eq!(168, solution_2(DATA));
}
}
Rust solution to AoC|2021|08.
For part 1, the most difficult part for me was to parse the input. In the current version, I parse the unique patterns and outputs into integers, where bits 0 to 6 correspond to wires a to g.
///
/// each pattern / output is represented by an integer whose bits indicate the state of segments a (bit 0) to g (bit 6)
pub fn parse(content: &str) -> Vec<(Vec<usize>, Vec<usize>)> {
content
.lines()
.map(|line| {
line.split(" | ") // separate patterns from outputs
.map(|part| {
part.split_ascii_whitespace() // separate individual patterns
.map(|pattern| {
// convert pattern to usize
pattern
.chars()
.fold(0, |bits, c| bits | 1 << (c as usize - 'a' as usize))
})
.collect::<Vec<_>>() // patterns / outputs as usize
})
.collect::<VecDeque<_>>() // VecDeque with one element for patterns and one for outputs
})
.map(|mut parts| (parts.pop_front().unwrap(), parts.pop_front().unwrap()))
.collect()
}
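A quick sanity check of this encoding (a standalone sketch reusing the same fold expression as `parse`): the pattern `dab` sets bits 0 (`a`), 1 (`b`) and 3 (`d`), i.e. `0b0001011`.

```rust
/// encode a pattern string into an integer with one bit per segment
fn encode(pattern: &str) -> usize {
    pattern
        .chars()
        .fold(0, |bits, c| bits | 1 << (c as usize - 'a' as usize))
}

fn main() {
    assert_eq!(0b0001011, encode("dab"));
    // the order of wires does not matter, only the set of segments
    assert_eq!(encode("ab"), encode("ba"));
    // count_ones identifies digits with unique segment counts, e.g. 1 uses 2 segments
    assert_eq!(2, encode("ab").count_ones());
}
```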
Counting the unique outputs is simple
/// count unique output values (1 - 2 bits set, 7 - 3 bits set, 4 - 4 bits set, 8 - 7 bits set) in all outputs
pub fn solution_1(displays: &[(Vec<usize>, Vec<usize>)]) -> usize {
displays
.iter()
.map(|(_, outputs)| {
outputs
.iter()
.filter(|output| [2, 3, 4, 7].contains(&output.count_ones()))
.count()
})
.sum()
}
For part 2 I did not really have a good idea and played around for a while. What I came up with is a map of wires to candidate segments: I iterate over all unique patterns and exclude candidate segments for each wire.
For patterns with 2, 3, or 4 wires, the 2, 3, or 4 corresponding segments are known.
For patterns with 5 or 6 wires, the unused wires correspond to one of 4 or 3 segments, respectively.
Once all patterns are processed, the map is further reduced: segments which are the only remaining candidate for one wire are removed from the candidates of all other wires.
That was enough in my case to end up with a unique mapping.
/// sum over decoded outputs
pub fn solution_2(displays: &[(Vec<usize>, Vec<usize>)]) -> usize {
let f = if cfg!(feature = "rjplog_solution") {
// alternative decoder if feature "rjplog_solution" is chosen
self::decode_alt
} else {
self::decode
};
displays
.iter()
.map(|(patterns, outputs)| f(patterns, outputs))
.sum()
}
/// determine wiring from the unique patterns and decode the output
///
/// # Example
/// ```
/// # use mr_kaffee_2021_08::*;
/// let content =
/// "acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab | cdfeb fcadb cdfeb cdbaf";
/// let displays = parse(content);
/// let (patterns, outputs) = &displays[0];
/// assert_eq!(5353, decode(patterns, outputs));
/// ```
pub fn decode(patterns: &[usize], outputs: &[usize]) -> usize {
// map is a vector of 7 integers, one for each segment
// each row represents a wire in a display (a = 0 to g = 6).
// Each bit in the entry represents a segment, the wire actually controls (a = 0 to g = 6)
let mut map: Vec<usize> = vec![(1 << 7) - 1; 7];
// 1 -> _ _ c _ _ f _ -> set bits match 0b0100100
// 7 -> a _ c _ _ f _ -> set bits match 0b0100101
// 4 -> _ b c d _ f _ -> set bits match 0b0101110
// 2 -> a _ c d e _ g
// 3 -> a _ c d _ f g
// 5 -> a b _ d _ f g -> unset bits match 0b0110110
// 0 -> a b c _ e f g
// 6 -> a b _ d e f g
// 9 -> a b c d _ f g -> unset bits match 0b0011100
let masks = HashMap::from([
(2, (0b0100100, !0b0100100)),
(3, (0b0100101, !0b0100101)),
(4, (0b0101110, !0b0101110)),
(5, (0b1111111, 0b0110110)),
(6, (0b1111111, 0b0011100)),
]);
// for each pattern, reduce map by impossible bits
for pattern in patterns {
if let Some((p, n)) = masks.get(&pattern.count_ones()) {
for wire in 0..7 {
map[wire] &= if (pattern >> wire) & 1 == 1 { p } else { n };
}
}
}
// reduce map
// if a row contains only one bit set, this bit is removed from every other row
for wire_1 in 0..7 {
if map[wire_1].count_ones() == 1 {
for wire_2 in (0..7).filter(|wire_2| *wire_2 != wire_1) {
map[wire_2] &= !map[wire_1];
}
}
}
// determine digits and fold to output
outputs
.iter()
.map(|output| {
// map wires to segments
(0..7)
.filter(|wire| (output >> wire) & 1 == 1)
.fold(0, |digit, wire| digit | map[wire])
})
.fold(0, |result, digit| {
// calculate output from digits
10 * result
+ match digit {
// map digit patterns to decimal digit
0b1110111 => 0,
0b0100100 => 1,
0b1011101 => 2,
0b1101101 => 3,
0b0101110 => 4,
0b1101011 => 5,
0b1111011 => 6,
0b0100101 => 7,
0b1111111 => 8,
0b1101111 => 9,
_ => panic!("Invalid digit: {}", digit),
}
})
}
I looked at the solution by RJPlog later, which is actually much simpler than mine. Here is a Rust version of his decode idea.
/// decode output (solution idea by https://github.com/RJPlog[RJPlog])
///
/// # Example
/// ```
/// # use mr_kaffee_2021_08::*;
/// let content =
/// "acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab | cdfeb fcadb cdfeb cdbaf";
/// let displays = parse(content);
/// let (patterns, outputs) = &displays[0];
/// assert_eq!(5353, decode_alt(patterns, outputs));
/// ```
pub fn decode_alt(patterns: &[usize], outputs: &[usize]) -> usize {
let mut map: Vec<usize> = vec![0; 10];
for pattern in patterns {
// unique segment counts
// 1 -> _ _ c _ _ f _ 2 segments
// 7 -> a _ c _ _ f _ 3 segments
// 4 -> _ b c d _ f _ 4 segments
// 8 -> a b c d e f g 7 segments
match pattern.count_ones() {
2 => map[1] = *pattern,
3 => map[7] = *pattern,
4 => map[4] = *pattern,
7 => map[8] = *pattern,
_ => {} // nothing here
}
}
for pattern in patterns.iter().filter(|pattern| pattern.count_ones() == 5) {
// 2 -> a _ c d e _ g
// 3 -> a _ c d _ f g
// 5 -> a b _ d _ f g
if pattern & map[1] == map[1] {
// c | f is only contained in segments for 3
map[3] = *pattern;
} else if pattern & (map[4] & !map[1]) == (map[4] & !map[1]) {
// b | c | d | f & !(c | f) = b | d is only contained in segments for 5
map[5] = *pattern;
} else {
// if it is neither 3 nor 5, it must be 2
map[2] = *pattern;
}
}
for pattern in patterns.iter().filter(|pattern| pattern.count_ones() == 6) {
// 0 -> a b c _ e f g
// 6 -> a b _ d e f g
// 9 -> a b c d _ f g
if pattern & map[3] == map[3] {
// a | c | d | f | g is only contained in the segments for 9
map[9] = *pattern;
} else if pattern & map[5] == map[5] {
// a | b | d | f | g is contained in the segments for 9 (already handled) and 6
map[6] = *pattern;
} else {
// if it is neither 9 nor 6, it must be 0
map[0] = *pattern;
}
}
outputs
.iter()
.map(|output| map.iter().position(|pattern| output == pattern).unwrap())
.fold(0, |result, digit| 10 * result + digit)
}
Run the alternative solution with cargo run --release --features rjplog_solution
Not only is this solution simpler, but it also runs faster. With compiler optimizations turned on, my original part 2 solves in about 1ms, while the alternative solution takes about 0.1ms.
#[cfg(test)]
mod tests {
use super::*;
const CONTENT: &str =
"be cfbegad cbdgef fgaecd cgeb fdcge agebfd fecdb fabcd edb | fdgacbe cefdb cefbgd gcbe
edbfga begcd cbg gc gcadebf fbgde acbgfd abcde gfcbed gfec | fcgedb cgb dgebacf gc
fgaebd cg bdaec gdafb agbcfd gdcbef bgcad gfac gcb cdgabef | cg cg fdcagb cbg
fbegcd cbd adcefb dageb afcb bc aefdc ecdab fgdeca fcdbega | efabcd cedba gadfec cb
aecbfdg fbg gf bafeg dbefa fcge gcbea fcaegb dgceab fcbdga | gecf egdcabf bgf bfgea
fgeab ca afcebg bdacfeg cfaedg gcfdb baec bfadeg bafgc acf | gebdcfa ecba ca fadegcb
dbcfg fgd bdegcaf fgec aegbdf ecdfab fbedc dacgb gdcebf gf | cefg dcbef fcge gbcadfe
bdfegc cbegaf gecbf dfcage bdacg ed bedf ced adcbefg gebcd | ed bcgafe cdgba cbgef
egadfb cdbfeg cegd fecab cgb gbdefca cg fgcdab egfdb bfceg | gbdfcae bgc cg cgb
gcafb gcf dcaebfg ecagb gf abcdeg gaef cafbge fdbac fegbdc | fgae cfgab fg bagce";
#[test]
fn test_solution_1() {
let displays = parse(CONTENT);
assert_eq!(26, solution_1(&displays));
}
#[test]
fn test_solution_2() {
let data = parse(CONTENT);
let sol = solution_2(&data);
assert_eq!(61229, sol);
}
}
Rust solution to AoC|2021|09.
I parse the input into a flat list and return the width of the grid in addition
/// get width of grid and numbers as flat vector
///
/// ```
/// # use mr_kaffee_2021_09::*;
/// assert_eq!((1, vec![0,1]), parse("0\n1"));
/// ```
pub fn parse(content: &str) -> (usize, Vec<usize>) {
let w = content.lines().next().expect("No lines").len();
let grid = content
.chars()
.filter(|c| c.is_ascii_digit())
.map(|c| (c as usize - '0' as usize))
.collect();
(w, grid)
}
Part one is about finding the low points. This is quite straightforward.
/// find all elements for which all adjacents have higher values, sum their values incremented by 1
pub fn solution_1(w: usize, grid: &[usize]) -> usize {
(0..grid.len())
.map(|idx| (idx % w, idx / w))
.filter(|(x, y)| {
(*x == 0 || grid[x - 1 + w * y] > grid[x + w * y])
&& (*x == w - 1 || grid[x + 1 + w * y] > grid[x + w * y])
&& (*y == 0 || grid[x + w * (y - 1)] > grid[x + w * y])
&& (*y == grid.len() / w - 1 || grid[x + w * (y + 1)] > grid[x + w * y])
})
.fold(0, |sum, (x, y)| sum + grid[x + w * y] + 1)
}
Finally a puzzle where a breadth-first traversal could be used (there might be easier approaches…).
I did not read the instructions well and thought that adjacent coordinates only belong to a basin if their value was higher than the neighbor's value. My tests helped me to sort that out and understand that I only have to look for coordinates with value 9 that delimit the basins.
Actually, the problem was that in my test case I did not start from the low point (1, 0) but from a neighbor (0, 0). Looking only for numbers with higher values would work when starting from the low point, but it is not necessary. Only looking for nines is fine and allows starting from anywhere in the basin.
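This can be verified with a small standalone sketch on the example grid (the `basin_size` helper mirrors the flood-fill idea but is not taken from the crate): starting from the low point `(1, 0)` or from its neighbor `(0, 0)` yields the same basin size.

```rust
use std::collections::VecDeque;

/// size of the basin containing (x, y): breadth-first flood fill stopping at nines
fn basin_size(w: usize, grid: &[usize], x: usize, y: usize) -> usize {
    let h = grid.len() / w;
    let mut seen = vec![false; grid.len()];
    let mut queue = VecDeque::from([(x, y)]);
    seen[x + w * y] = true;
    let mut count = 0;
    while let Some((x, y)) = queue.pop_front() {
        count += 1;
        for (xa, ya) in [
            (x.wrapping_sub(1), y),
            (x + 1, y),
            (x, y.wrapping_sub(1)),
            (x, y + 1),
        ] {
            // within bounds, not seen yet, and not a basin-delimiting nine
            if xa < w && ya < h && !seen[xa + w * ya] && grid[xa + w * ya] != 9 {
                seen[xa + w * ya] = true;
                queue.push_back((xa, ya));
            }
        }
    }
    count
}

fn main() {
    let (w, grid) = (10, vec![
        2, 1, 9, 9, 9, 4, 3, 2, 1, 0, 3, 9, 8, 7, 8, 9, 4, 9, 2, 1, 9, 8, 5, 6, 7, 8, 9, 8, 9, 2,
        8, 7, 6, 7, 8, 9, 6, 7, 8, 9, 9, 8, 9, 9, 9, 6, 5, 6, 7, 8,
    ]);
    // same size from the low point and from another cell of the same basin
    assert_eq!(3, basin_size(w, &grid, 1, 0));
    assert_eq!(3, basin_size(w, &grid, 0, 0));
}
```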
The breadth-first traversal is put in its own function get_basin_size for readability ;)
/// get the basin sizes of the basins the given coordinates belong to with a breadth first traversal
pub fn get_basin_size(w: usize, grid: &[usize], x: usize, y: usize) -> usize {
let mut unknown = vec![true; grid.len()];
let mut queue = VecDeque::new();
unknown[x + w * y] = false;
queue.push_back((x, y));
let mut count = 0;
let h = grid.len() / w;
// breadth first traversal
while let Some((x, y)) = queue.pop_front() {
count += 1;
for (x_a, y_a) in [
(x.wrapping_sub(1), y),
(x + 1, y),
(x, y.wrapping_sub(1)),
(x, y + 1),
] {
if (x_a < w && y_a < h) && unknown[x_a + w * y_a] && grid[x_a + w * y_a] != 9 {
// if adjacent is within grid, is not yet seen and has a value != 9, mark known and add to queue
unknown[x_a + w * y_a] = false;
queue.push_back((x_a, y_a));
}
}
}
count
}
/// get the product of the sizes of the (at most) three biggest basins
pub fn solution_2(w: usize, grid: &[usize]) -> usize {
// determine low points and calculate basin sizes for those
let mut basins = (0..grid.len())
.map(|idx| (idx % w, idx / w))
.filter(|(x, y)| {
(*x == 0 || grid[x - 1 + w * y] > grid[x + w * y])
&& (*x == w - 1 || grid[x + 1 + w * y] > grid[x + w * y])
&& (*y == 0 || grid[x + w * (y - 1)] > grid[x + w * y])
&& (*y == grid.len() / w - 1 || grid[x + w * (y + 1)] > grid[x + w * y])
})
.map(|(x, y)| get_basin_size(w, grid, x, y))
.collect::<Vec<_>>();
// sort basin sizes
basins.sort_unstable();
// product over the last three elements
basins.iter().rev().take(3).product()
}
#[cfg(test)]
mod tests {
use super::*;
const CONTENT: &str = "2199943210\n3987894921\n9856789892\n8767896789\n9899965678";
const W: usize = 10;
const GRID: &'static [usize] = &[
2, 1, 9, 9, 9, 4, 3, 2, 1, 0, 3, 9, 8, 7, 8, 9, 4, 9, 2, 1, 9, 8, 5, 6, 7, 8, 9, 8, 9, 2,
8, 7, 6, 7, 8, 9, 6, 7, 8, 9, 9, 8, 9, 9, 9, 6, 5, 6, 7, 8,
];
#[test]
fn test_parse() {
assert_eq!((W, GRID.to_vec()), parse(CONTENT));
}
#[test]
fn test_solution_1() {
assert_eq!(15, solution_1(W, GRID));
}
#[test]
fn test_get_basin_size() {
for (exp, (x, y)) in [(3, (1, 0)), (9, (9, 0)), (14, (2, 2)), (9, (6, 4))] {
assert_eq!(exp, get_basin_size(W, GRID, x, y));
}
}
#[test]
fn test_solution_2() {
assert_eq!(1_134, solution_2(W, GRID));
}
}
Read the puzzle description carefully (ha, not really a new learning) and continue to test (I think we had that before).
Rust solution to AoC|2021|10.
Solution idea is to parse the characters one by one. If a character is an opening delimiter, push the corresponding closing delimiter to a stack. If it is a closing delimiter, pop a value from the stack and compare it to the current character.
If a comparison fails, an illegal character is found and its score is returned. Then the sum over the scores is taken.
/// return the illegal char's score, if any, as a ``Some`` value, otherwise ``None``
pub fn get_illegal_score(line: &str) -> Option<u64> {
let mut stack = Vec::new();
for c in line.chars() {
match c {
'(' => stack.push(')'),
'[' => stack.push(']'),
'{' => stack.push('}'),
'<' => stack.push('>'),
')' if c != stack.pop().unwrap() => return Some(3),
']' if c != stack.pop().unwrap() => return Some(57),
'}' if c != stack.pop().unwrap() => return Some(1_197),
'>' if c != stack.pop().unwrap() => return Some(25_137),
_ => {} // nothing here
}
}
None
}
/// Calculate sum of scores of illegal chars
pub fn solution_1(lines: &[&str]) -> u64 {
lines
.iter()
.filter_map(|line| get_illegal_score(line))
.sum()
}
Same idea as for part 1. Now discard lines where comparisons fail. What remains on the stack, once a line is processed, are the characters required to repair the line.
/// get the repair score if the line is incomplete as a ``Some`` value, otherwise return ``None``
pub fn get_repair_score(line: &str) -> Option<u64> {
let mut stack = Vec::new();
for c in line.chars() {
match c {
'(' => stack.push(')'),
'[' => stack.push(']'),
'{' => stack.push('}'),
'<' => stack.push('>'),
')' | '>' | '}' | ']' if c != stack.pop().unwrap() => return None,
_ => {} // nothing here
}
}
Some(stack.iter().rev().fold(0, |score, c| {
score * 5
+ match c {
')' => 1,
']' => 2,
'}' => 3,
'>' => 4,
_ => unreachable!(),
}
}))
}
/// get the middle repair score
pub fn solution_2(lines: &[&str]) -> u64 {
let mut scores = lines
.iter()
.filter_map(|line| get_repair_score(line))
.collect::<Vec<_>>();
scores.sort_unstable();
scores[scores.len() / 2]
}
#[cfg(test)]
mod tests {
use super::*;
const CONTENT: &str = "[({(<(())[]>[[{[]{<()<>>
[(()[<>])]({[<{<<[]>>(
{([(<{}[<>[]}>{[]{[(<()>
(((({<>}<{<{<>}{[]{[]{}
[[<[([]))<([[{}[[()]]]
[{[{({}]{}}([{[{{{}}([]
{<[[]]>}<{[{[{[]{()[[[]
[<(<(<(<{}))><([]([]()
<{([([[(<>()){}]>(<<{{
<{([{{}}[<[[[<>{}]]]>[]]";
#[test]
fn test_find_illegal_char() {
let lines = parse(CONTENT);
assert_eq!(
vec![
None,
None,
Some(1197),
None,
Some(3),
Some(57),
None,
Some(3),
Some(25137),
None
],
lines
.iter()
.map(|line| get_illegal_score(line))
.collect::<Vec<_>>()
);
}
#[test]
fn test_solution_1() {
let lines = parse(CONTENT);
assert_eq!(26_397, solution_1(&lines));
}
#[test]
fn test_get_repair_score() {
let lines = parse(CONTENT);
assert_eq!(
vec![
Some(288957),
Some(5566),
None,
Some(1480781),
None,
None,
Some(995444),
None,
None,
Some(294)
],
lines
.iter()
.map(|line| get_repair_score(line))
.collect::<Vec<_>>()
);
}
#[test]
fn test_solution_2() {
let lines = parse(CONTENT);
assert_eq!(288_957, solution_2(&lines));
}
}
Rust solution to AoC|2021|11.
Parse the input into a flat list of integers
/// parse energy levels
///
/// # Panics
/// if number of energy levels parsed is not [N]^2
pub fn parse(content: &str) -> Vec<usize> {
let energies = content
.chars()
.filter(char::is_ascii_digit)
.map(|c| c as usize - '0' as usize)
.collect::<Vec<_>>();
assert_eq!(N * N, energies.len(), "Bad length");
energies
}
In each update step, first increment all energy levels and reset them to 0 if they exceed 9. All indices for reset entries (octopus flashed) are added to a stack.
Then do a depth first traversal (breadth first would work just as well):
pop elements off the stack while it is not empty
loop through all adjacents, skipping those which have already flashed (energy level already reset to 0)
increment adjacent’s energy level
if energy level exceeds 9, reset and add index to stack
/// do an update step on the energy levels
///
/// return the count of flashes in that step
pub fn step(energies: &mut [usize]) -> usize {
// flashing stack
let mut stack = Vec::new();
// increase all elements by one
for k in 0..energies.len() {
energies[k] += 1;
if energies[k] > FLASH_THRESHOLD {
// flashed -> reset, add index to stack
energies[k] = 0;
stack.push((k % N, k / N));
}
}
// depth first traversal to determine all flashes
let mut flash_count = 0;
while let Some((x, y)) = stack.pop() {
flash_count += 1;
// loop over adjacent entries
for (x_a, y_a) in [
(x + 1, y),
(x + 1, y + 1),
(x, y + 1),
(x.wrapping_sub(1), y + 1),
(x.wrapping_sub(1), y),
(x.wrapping_sub(1), y.wrapping_sub(1)),
(x, y.wrapping_sub(1)),
(x + 1, y.wrapping_sub(1)),
] {
if x_a >= N || y_a >= N {
// out of bounds
continue;
}
// flat index
if energies[x_a + N * y_a] == 0 {
// already flashed
continue;
}
// not flashed yet, increment
energies[x_a + N * y_a] += 1;
if energies[x_a + N * y_a] > FLASH_THRESHOLD {
// flashed -> reset, add index to stack
energies[x_a + N * y_a] = 0;
stack.push((x_a, y_a));
}
}
}
flash_count
}
pub const ROUNDS: usize = 100;
/// perform [ROUNDS] update steps and return total count of flashes
pub fn solution_1(energies: &[usize]) -> usize {
// work on my own copy of the grid
let mut energies = energies.to_owned();
(0..ROUNDS).map(|_| step(&mut energies)).sum()
}
Perform update steps until all 100 octopuses flash at the same time. Could have re-used the updated grid from part 1, but the solution is fast enough to re-do the first 100 steps ;)
/// perform update steps until all octopuses flash at the same time
///
/// return the first step when this occurs.
pub fn solution_2(energies: &[usize]) -> usize {
// work on my own copy of the grid
let mut energies = energies.to_owned();
let mut rounds = 1; // one based counting
while step(&mut energies) < N * N {
rounds += 1;
}
rounds
}
#[cfg(test)]
mod tests {
use super::*;
const CONTENT: &str = "5483143223
2745854711
5264556173
6141336146
6357385478
4167524645
2176841721
6882881134
4846848554
5283751526";
const ENERGIES: &'static [usize] = &[
5, 4, 8, 3, 1, 4, 3, 2, 2, 3, 2, 7, 4, 5, 8, 5, 4, 7, 1, 1, 5, 2, 6, 4, 5, 5, 6, 1, 7, 3,
6, 1, 4, 1, 3, 3, 6, 1, 4, 6, 6, 3, 5, 7, 3, 8, 5, 4, 7, 8, 4, 1, 6, 7, 5, 2, 4, 6, 4, 5,
2, 1, 7, 6, 8, 4, 1, 7, 2, 1, 6, 8, 8, 2, 8, 8, 1, 1, 3, 4, 4, 8, 4, 6, 8, 4, 8, 5, 5, 4,
5, 2, 8, 3, 7, 5, 1, 5, 2, 6,
];
const CONTENT_1: &str = "6594254334
3856965822
6375667284
7252447257
7468496589
5278635756
3287952832
7993992245
5957959665
6394862637";
const CONTENT_2: &str = "8807476555
5089087054
8597889608
8485769600
8700908800
6600088989
6800005943
0000007456
9000000876
8700006848";
const CONTENT_10: &str = "0481112976
0031112009
0041112504
0081111406
0099111306
0093511233
0442361130
5532252350
0532250600
0032240000";
#[test]
fn test_parse() {
assert_eq!(ENERGIES, parse(CONTENT));
}
#[test]
fn test_step() {
let mut energies = parse(CONTENT);
let energies_1 = parse(CONTENT_1);
let energies_2 = parse(CONTENT_2);
let energies_10 = parse(CONTENT_10);
let mut flash_count = 0;
// one step
flash_count += step(&mut energies);
assert_eq!(0, flash_count);
assert_eq!(energies_1, energies);
// another step
flash_count += step(&mut energies);
assert_eq!(35, flash_count);
assert_eq!(energies_2, energies);
// 8 more steps
for _ in 2..10 {
flash_count += step(&mut energies);
}
assert_eq!(204, flash_count);
assert_eq!(energies_10, energies);
}
#[test]
fn test_solution_1() {
assert_eq!(1656, solution_1(&parse(CONTENT)));
}
#[test]
fn test_solution_2() {
assert_eq!(195, solution_2(&parse(CONTENT)));
}
}
In depth first traversal (as well as in breadth first traversal), be careful to not add elements to the stack (or queue) multiple times. In my first approach, I only reset the energy levels when they were popped from the queue. Thus entries could get modified while their indices were in the queue, and indices could be added multiple times. Took a while to sort that out.
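The difference can be illustrated on a tiny graph. Below is a standalone sketch (not from the solutions) that counts queue pushes for both variants: marking nodes as seen when they are pushed bounds the number of pushes by the node count, while marking them only when they are popped lets the same node enter the queue repeatedly.

```rust
use std::collections::VecDeque;

/// BFS from `start` counting how many elements are pushed to the queue;
/// `mark_on_push` selects whether a node is marked as seen on push or on pop
fn pushes(adj: &[Vec<usize>], start: usize, mark_on_push: bool) -> usize {
    let mut seen = vec![false; adj.len()];
    let mut queue = VecDeque::from([start]);
    seen[start] = mark_on_push;
    let mut count = 1;
    while let Some(node) = queue.pop_front() {
        if !mark_on_push {
            // late marking: duplicates may already sit in the queue
            if seen[node] {
                continue;
            }
            seen[node] = true;
        }
        for &next in &adj[node] {
            if !seen[next] {
                if mark_on_push {
                    seen[next] = true;
                }
                queue.push_back(next);
                count += 1;
            }
        }
    }
    count
}

fn main() {
    // 4-cycle: 0 - 1 - 2 - 3 - 0
    let adj = vec![vec![1, 3], vec![0, 2], vec![1, 3], vec![2, 0]];
    assert_eq!(4, pushes(&adj, 0, true)); // one push per node
    assert!(pushes(&adj, 0, false) > 4); // duplicates enter the queue
}
```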
Rust solution to AoC|2021|12.
Again a kind of breadth first traversal and the first day in the 2021 edition where my solution takes significant time (more than a couple of ms) to compute. After some optimizations, I am now down to about 40ms for part 2.
To start, I parse the input into a struct CaveMap
(see comments in code for details):
/// Map of Caves
///
/// Each cave has a unique ID: the position of its name in the [CaveMap::caves] field.
/// The adjacents of the cave are stored in the list at the same position in the [CaveMap::adjacents] field.
///
/// The IDs of the start and end cave are stored in the fields [CaveMap::start] and [CaveMap::end].
pub struct CaveMap<'a> {
pub caves: Vec<&'a str>,
pub adjacents: Vec<Vec<usize>>,
pub start: usize,
pub end: usize,
}
impl<'a> CaveMap<'a> {
/// parse input to map
pub fn parse(input: &'a str) -> Self {
let mut caves = Vec::new();
let mut adjacents = Vec::new();
let mut start = 0;
let mut end = 0;
for line in input.lines() {
let mut parts = line.split('-').map(|part| {
caves
.iter()
.position(|cave| *cave == part)
.unwrap_or_else(|| {
caves.push(part);
adjacents.push(Vec::new());
// keep ID to start or end
match part {
"start" => start = caves.len() - 1,
"end" => end = caves.len() - 1,
_ => {}
}
caves.len() - 1
})
});
let lhs_id = parts.next().expect("No LHS");
let rhs_id = parts.next().expect("No RHS");
assert!(parts.next().is_none(), "More than two parts");
adjacents[lhs_id].push(rhs_id);
adjacents[rhs_id].push(lhs_id);
}
CaveMap {
caves,
adjacents,
start,
end,
}
}
/// check whether cave with given ID is small
pub fn is_small(&self, id: usize) -> bool {
self.caves[id].chars().next().unwrap().is_ascii_lowercase()
}
}
In my initial implementation, I used a BTreeMap to map cave names to adjacent names. In the current version, I use cave IDs instead, which are indices into the vectors of the CaveMap structure. This reduced the runtime by about 25%.
The general solution idea is to do a (breadth-first) traversal on a graph of all possible paths through the cave system (graph nodes are paths, not caves):
Push a path consisting of the start cave only to the queue
Pop a path from the queue, and look at all adjacent caves of the last cave on the path
if the adjacent cave is start, ignore it
if the adjacent cave is end, the path is complete; increase the unique path counter
if the adjacent cave is a large cave, append it to the path and push the extended path to the queue
if the adjacent cave is a small cave, append it to the path and push the extended path to the queue if
the small cave is not yet on the path (part 1 & 2) or
the small cave is already on the path and no small cave is on the path twice yet (part 2 only)
Repeat step 2 until queue is empty
/// count number of distinct paths using the map
///
/// the flag ``no_duplicate_small`` controls whether small caves may be visited more than
/// once. If the flag is set to ``true``, this is not allowed at all. If it is set to ``false``,
/// at most one small cave may be visited twice.
pub fn get_path_count(map: &CaveMap, no_duplicate_small: bool) -> usize {
// store all path elements as a cave ID and a parent pointer (= index to this vec) wrapped in an Option
let mut paths: Vec<(usize, Option<usize>)> = Vec::new();
// count of unique paths
let mut path_count = 0;
let mut queue = VecDeque::new();
paths.push((map.start, None));
queue.push_back((paths.len() - 1, no_duplicate_small));
while let Some((ptr, no_duplicate_small)) = queue.pop_front() {
let cave_id = paths[ptr].0;
for &id in &map.adjacents[cave_id] {
if id == map.start {
// never go back to start
continue;
} else if id == map.end {
// new path to "end" found
path_count += 1;
continue;
}
let duplicate_small = map.is_small(id) && contains(&paths, ptr, id);
if no_duplicate_small && duplicate_small {
// skip small cave already on path, if no duplicate small caves are allowed
continue;
}
// push new path element to paths vec
paths.push((id, Some(ptr)));
// add pointer to path element to queue
// if a duplicate small was added, no further duplicate smalls are allowed
queue.push_back((paths.len() - 1, no_duplicate_small || duplicate_small));
}
}
path_count
}
Initially, I used Vecs to represent paths. To extend a path by another cave, I copied the complete Vec, added the cave's name and pushed the Vec to the queue. In the current version, I use a kind of linked list (each path element holds a reference to its parent element). Since linked lists are something that does not go very well with Rust's ownership and borrowing system (or maybe I just don't know how to do that properly), I keep a Vec of all path elements (for part 2, a total of 427,805 path elements are created). Each path element is a tuple of a cave ID and a parent pointer, which points to the parent path element in that Vec. Because the start of a path has no parent, the parent pointer is wrapped in an Option. This optimization reduced the runtime by about 75%.
The check whether a cave ID is contained in a path is implemented as follows:
/// determine whether a path contains a cave
fn contains(paths: &[(usize, Option<usize>)], ptr: usize, id: usize) -> bool {
let mut ptr = ptr;
loop {
let (cave_id, opt_ptr) = paths[ptr];
if cave_id == id {
return true;
}
if let Some(next_ptr) = opt_ptr {
ptr = next_ptr;
} else {
return false;
}
}
}
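A minimal standalone usage sketch of the parent-pointer scheme (same `contains` logic as above, with hypothetical cave IDs): extending a path is a single push, and shared prefixes are stored only once.

```rust
/// determine whether the path ending at element `ptr` contains cave `id`
/// (path elements are tuples of a cave ID and an optional parent pointer)
fn contains(paths: &[(usize, Option<usize>)], mut ptr: usize, id: usize) -> bool {
    loop {
        let (cave_id, parent) = paths[ptr];
        if cave_id == id {
            return true;
        }
        match parent {
            Some(p) => ptr = p,
            None => return false,
        }
    }
}

fn main() {
    // build the path 0 -> 2 -> 5 in the arena; element index = "pointer"
    let mut paths = vec![(0, None)];
    paths.push((2, Some(0)));
    paths.push((5, Some(1)));
    // a second path 0 -> 3 branches off without copying the shared prefix
    paths.push((3, Some(0)));
    assert!(contains(&paths, 2, 0)); // 0 is on the path ending at element 2
    assert!(!contains(&paths, 2, 3)); // 3 is not on that path
    assert!(contains(&paths, 3, 0)); // the branch shares the start element
}
```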
#[cfg(test)]
mod tests {
use std::collections::{BTreeMap, BTreeSet};
use super::*;
const TEST_INPUT: &str = "start-A
start-b
A-c
A-b
b-d
A-end
b-end";
fn get_test_data() -> BTreeMap<&'static str, BTreeSet<&'static str>> {
BTreeMap::from([
("start", BTreeSet::from(["A", "b"])),
("A", BTreeSet::from(["start", "c", "b", "end"])),
("b", BTreeSet::from(["start", "A", "d", "end"])),
("c", BTreeSet::from(["A"])),
("d", BTreeSet::from(["b"])),
("end", BTreeSet::from(["A", "b"])),
])
}
fn get_adjacents<'a>(map: &'a CaveMap, name: &str) -> Option<impl Iterator<Item = &'a str>> {
map.caves
.iter()
.position(|cave| *cave == name)
.map(|idx| map.adjacents[idx].iter().map(|adj| map.caves[*adj]))
}
#[test]
fn test_parse() {
let map = parse(TEST_INPUT);
for (name, adjacents) in get_test_data() {
assert_eq!(
Some(adjacents),
get_adjacents(&map, name).map(|it| it.collect())
);
}
}
#[test]
fn test_solution_1() {
assert_eq!(10, solution_1(&parse(&TEST_INPUT)))
}
#[test]
fn test_solution_2() {
assert_eq!(36, solution_2(&parse(&TEST_INPUT)))
}
}
Rust solution to AoC|2021|13.
I parse the points into a set (which handles duplicates for me) and the fold instructions into a list of tuples of a boolean flag and a fold coordinate. The boolean flag is true for horizontal folds and false for vertical folds.
pub fn parse(content: &str) -> (HashSet<(usize, usize)>, Vec<(bool, usize)>) {
let mut parts = content.split("\n\n");
(
parts
.next()
.expect("No points")
.lines()
.map(|line| line.split(","))
.map(|mut point_parts| {
(
point_parts
.next()
.expect("No x coordinate")
.parse()
.expect("Could not parse x"),
point_parts
.next()
.expect("No y coordinate")
.parse()
.expect("Could not parse y"),
)
})
.collect(),
parts
.next()
.expect("No fold instructions")
.lines()
.map(|line| line.split("="))
.map(|mut fold_parts| {
(
match fold_parts.next() {
Some("fold along y") => true,
Some("fold along x") => false,
_ => panic!("Unexpected fold instruction"),
},
fold_parts
.next()
.expect("No fold coordinate")
.parse()
.expect("Could not parse fold coordinate"),
)
})
.collect(),
)
}
The fold function iterates over all points, checks whether they are below / to the right of the fold coordinate and maps them above / to the left. Then everything is collected into a set again (removing duplicates)
/// perform a single fold
pub fn fold(
points: &HashSet<(usize, usize)>,
horizontal: bool,
coord: usize,
) -> HashSet<(usize, usize)> {
points
.iter()
.map(|(x, y)| {
if horizontal && y > &coord {
(*x, 2 * coord - *y)
} else if !horizontal && x > &coord {
(2 * coord - *x, *y)
} else {
(*x, *y)
}
})
.collect()
}
This is used for part 1 as follows:
/// count points after first fold operation
pub fn solution_1(points: &HashSet<(usize, usize)>, folds: &[(bool, usize)]) -> usize {
let (horizontal, fold_coordinate) = folds.first().expect("No folds");
fold(points, *horizontal, *fold_coordinate).len()
}
Part 2 adds calculating a bounding box of the result and putting everything together in a string:
pub fn points_to_string(points: &HashSet<(usize, usize)>) -> String {
// calculate bounding box
let (x_min, y_min, x_max, y_max) = points.iter().fold(
(usize::MAX, usize::MAX, 0, 0),
|(x_min, y_min, x_max, y_max), (x, y)| {
(
cmp::min(x_min, *x),
cmp::min(y_min, *y),
cmp::max(x_max, *x + 1),
cmp::max(y_max, *y + 1),
)
},
);
// assemble result string
(y_min..y_max)
.map(|y| {
(x_min..x_max)
.map(|x| if points.contains(&(x, y)) { '#' } else { '.' })
.collect::<String>()
})
.map(|line| line.add("\n"))
.collect::<String>()
}
/// perform all fold operations and return result as a ``String``
pub fn solution_2(points: &HashSet<(usize, usize)>, folds: &[(bool, usize)]) -> String {
let mut points = points.to_owned();
for (horizontal, fold_coordinate) in folds {
points = fold(&points, *horizontal, *fold_coordinate);
}
points_to_string(&points)
}
Solving part 2 took much longer than it should have, because I messed up the bounding box. I searched for a mistake in the calculations, only to figure out that I had just taken y_min
for x_max
and vice versa, resulting in empty strings all the time :(.
#[cfg(test)]
mod tests {
use super::*;
const TEST_CONTENT: &str = "6,10
0,14
9,10
0,3
10,4
4,11
6,0
6,12
4,1
0,13
10,12
3,4
3,0
8,4
1,10
2,14
8,10
9,0
fold along y=7
fold along x=5";
const TEST_POINTS: [(usize, usize); 18] = [
(6, 10),
(0, 14),
(9, 10),
(0, 3),
(10, 4),
(4, 11),
(6, 0),
(6, 12),
(4, 1),
(0, 13),
(10, 12),
(3, 4),
(3, 0),
(8, 4),
(1, 10),
(2, 14),
(8, 10),
(9, 0),
];
const TEST_FOLDS: &[(bool, usize)] = &[(true, 7), (false, 5)];
const TEST_RESULT: &str = "#####\n\
#...#\n\
#...#\n\
#...#\n\
#####\n";
#[test]
fn test_parse() {
let (points, folds) = parse(&TEST_CONTENT);
assert_eq!(HashSet::from(TEST_POINTS), points);
assert_eq!(TEST_FOLDS, folds);
}
#[test]
fn test_solution_1() {
let (points, folds) = parse(&TEST_CONTENT);
assert_eq!(17, solution_1(&points, &folds));
}
#[test]
fn test_solution_2() {
let (points, folds) = parse(&TEST_CONTENT);
assert_eq!(TEST_RESULT, solution_2(&points, &folds));
}
}
Rust solution to AoC|2021|14.
Ugh! Today’s was the biggest headache so far this year.
I was aware when starting part 1 that a solution based on arrays would probably not solve part 2. It was good enough for part 1 though.
But while implementing it, I was sure I would need linked lists for part 2, and I was afraid of that, because I really don’t know how to implement linked lists in Rust, and the LinkedList
that Rust comes with did not help a lot. So I spent quite some time trying to implement a linked list in Rust and — again — failed. I switched to Java, where linked lists are no pain at all, just to realize that no list-based solution would work at all for today.
With a bit of thinking about the problem, I realized that it would be enough to just count the pairs; it does not really matter where they appear in the polymer. With this idea, the solution was not very complicated…
Parse the input:
/// parse content into a vector of chars and a map with pairs of chars as keys and the
/// char to be inserted as value
pub fn parse(content: &str) -> (Vec<char>, HashMap<(char, char), char>) {
let mut parts = content.split("\n\n");
let template = parts
.next()
.expect("No template")
.chars()
.collect::<Vec<_>>();
let rules = parts
.next()
.expect("No rules")
.lines()
.map(|line| {
let mut chars = line.chars();
let a = chars.next().expect("No first char");
let b = chars.next().expect("No second char");
let c = chars.skip(4).next().expect("No resulting char");
((a, b), c)
})
.collect::<HashMap<_, _>>();
(template, rules)
}
Simulate rounds of insertions:
/// simulate given number of rounds starting from polymer template and using given
/// rules
pub fn simulate_rounds(
template: &[char],
rules: &HashMap<(char, char), char>,
rounds: usize,
) -> u64 {
// map of pairs to number of occurrences of those
let mut pairs: HashMap<_, u64> = HashMap::new();
for (c1, c2) in template.iter().zip(template.iter().skip(1)) {
*pairs.entry((*c1, *c2)).or_insert(0) += 1;
}
// in each round, update pairs
for _ in 0..rounds {
let mut upd = HashMap::new();
for ((c1, c2), cnt0) in &pairs {
if let Some(d) = rules.get(&(*c1, *c2)) {
// if pair is found in rules, replace (c1, c2) by (c1, d) and (d, c2)
*upd.entry((*c1, *d)).or_insert(0) += cnt0;
*upd.entry((*d, *c2)).or_insert(0) += cnt0;
} else {
// if pair is not found in rules, keep pair
*upd.entry((*c1, *c2)).or_insert(0) += cnt0;
}
}
pairs = upd;
}
// count symbols in pairs
// each symbol contributes to two pairs except for the first and the last symbol
// start with count = 1 for first and last symbol to consistently count every symbol
// twice
let mut counts: HashMap<char, u64> = HashMap::new();
// use the entry API so that identical first and last symbols are both counted
*counts.entry(template[0]).or_insert(0) += 1;
*counts.entry(template[template.len() - 1]).or_insert(0) += 1;
for ((c1, c2), cnt) in &pairs {
*counts.entry(*c1).or_insert(0) += cnt;
*counts.entry(*c2).or_insert(0) += cnt;
}
// get (twice the) count for the most and least frequent symbols
let min = counts.values().min().expect("No min");
let max = counts.values().max().expect("No max");
// return difference, divide by 2 because every symbol is counted twice
(max - min) / 2
}
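The double-counting trick can be checked on the template NNCB alone, before any insertion rounds. Here is a minimal standalone sketch (my own illustration, not the solution code) that applies the same counting directly to the template:

```rust
use std::collections::HashMap;

/// count symbols via the pairs they are part of: every symbol sits in two
/// pairs except the first and the last, so seed those with 1 and halve at the end
fn symbol_counts(template: &[char]) -> HashMap<char, u64> {
    let mut counts: HashMap<char, u64> = HashMap::new();
    *counts.entry(template[0]).or_insert(0) += 1;
    *counts.entry(template[template.len() - 1]).or_insert(0) += 1;
    for w in template.windows(2) {
        *counts.entry(w[0]).or_insert(0) += 1; // left element of the pair
        *counts.entry(w[1]).or_insert(0) += 1; // right element of the pair
    }
    counts.values_mut().for_each(|v| *v /= 2);
    counts
}

fn main() {
    // "NNCB" contains N twice and C and B once each
    let counts = symbol_counts(&['N', 'N', 'C', 'B']);
    assert_eq!(Some(&2), counts.get(&'N'));
    assert_eq!(Some(&1), counts.get(&'C'));
    assert_eq!(Some(&1), counts.get(&'B'));
}
```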
#[cfg(test)]
mod tests {
use super::*;
const CONTENT: &str = "NNCB
CH -> B
HH -> N
CB -> H
NH -> C
HB -> C
HC -> B
HN -> C
NN -> C
BH -> H
NC -> B
NB -> B
BN -> B
BB -> N
BC -> B
CC -> N
CN -> C";
const TEST_TEMPLATE: [char; 4] = ['N', 'N', 'C', 'B'];
const TEST_RULES: [((char, char), char); 16] = [
(('C', 'H'), 'B'),
(('H', 'H'), 'N'),
(('C', 'B'), 'H'),
(('N', 'H'), 'C'),
(('H', 'B'), 'C'),
(('H', 'C'), 'B'),
(('H', 'N'), 'C'),
(('N', 'N'), 'C'),
(('B', 'H'), 'H'),
(('N', 'C'), 'B'),
(('N', 'B'), 'B'),
(('B', 'N'), 'B'),
(('B', 'B'), 'N'),
(('B', 'C'), 'B'),
(('C', 'C'), 'N'),
(('C', 'N'), 'C'),
];
#[test]
fn test_parse() {
let (template, rules) = parse(&CONTENT);
assert_eq!(Vec::from(TEST_TEMPLATE), template);
assert_eq!(HashMap::from(TEST_RULES), rules);
}
#[test]
fn test_solution_1() {
assert_eq!(
1_588,
solution_1(&Vec::from(TEST_TEMPLATE), &HashMap::from(TEST_RULES))
);
}
}
If Eric Wastl says "This polymer grows quickly" and the examples provided in the puzzle description produce numbers above 2 trillion, there is probably no list-based implementation (be it linked lists or array-based lists) that efficiently solves the puzzle.
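A quick back-of-the-envelope sketch (my own illustration) makes the growth concrete: every insertion round turns a polymer of length n into one of length 2n - 1, so the length follows the closed form (n0 - 1) * 2^rounds + 1.

```rust
/// length of a polymer after `rounds` insertion rounds, starting from a
/// template of length `len`: every round maps length n to 2 * n - 1
fn polymer_len(len: u64, rounds: u32) -> u64 {
    (len - 1) * 2u64.pow(rounds) + 1
}

fn main() {
    // the example template "NNCB" has length 4
    assert_eq!(3_073, polymer_len(4, 10)); // part 1 is still list-friendly
    assert!(polymer_len(4, 40) > 3_000_000_000_000); // part 2 is not
}
```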
Rust solution to AoC|2021|15.
When looking at the puzzle, I thought: "Obviously Dijkstra". Then I implemented a solution which I thought was Dijkstra while it actually was not. It still worked.
I realized my implementation was not Dijkstra when I tried to extend it to A* to figure out whether this improves performance.
Hence, as a second step, I implemented Dijkstra. This works as well (obviously) but performs worse than my initial solution. For my Dijkstra implementation, A* (with the remaining distance to the target, width - 1 - x + height - 1 - y
as heuristic) does not perform any better.
The reason why my first solution works is the structure of the problem: the weight of an edge to any coordinate (x, y) is independent of the neighbor from which we reach the coordinate. Hence, once a coordinate is reached in the algorithm, it is not possible to reach it later on with a lower overall risk. That essentially means: there will never be any decrease-key operations. This can be confirmed in the Dijkstra implementation by using the feature dijkstra_panic_decrease
which causes the program to panic on an attempt to decrease a key. Run the puzzle with cargo run --release --features dijkstra_panic_decrease
.
The simple solution (my first solution) is as follows:
pub fn solve(grid: &[usize], w: usize, n: usize) -> usize {
let h = grid.len() / w;
let mut heap = BinaryHeap::new();
let mut settled = vec![false; grid.len() * n * n];
heap.push((usize::MAX, 0)); // nodes are tuples (usize::MAX - risk, idx)
settled[0] = true;
while let Some((risk, idx)) = heap.pop() {
if idx == grid.len() * n * n - 1 {
return usize::MAX - risk; // target reached
}
let (x, y) = (idx % (w * n), idx / (w * n));
for (x_a, y_a) in [
(x + 1, y),
(x, y + 1),
(x.wrapping_sub(1), y),
(x, y.wrapping_sub(1)),
] {
if x_a >= w * n || y_a >= h * n || settled[x_a + y_a * w * n] {
continue; // out of bounds or visited
}
let risk = risk - ((grid[x_a % w + y_a % h * w] + x_a / w + y_a / h - 1) % 9) - 1;
settled[x_a + y_a * w * n] = true; // first visit settles
heap.push((risk, x_a + y_a * w * n));
}
}
panic!("No path found");
}
In the simple solution, I "invert" the risk (usize::MAX - risk
) because BinaryHeap
returns the biggest element first.
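An alternative to the manual inversion is std::cmp::Reverse from the standard library, which flips the ordering so that the max-heap behaves like a min-heap. A minimal sketch of the idiom (not the code used in the solution):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// pop the (risk, idx) tuple with the smallest risk from a max-heap
/// whose entries are wrapped in `Reverse`
fn pop_min(heap: &mut BinaryHeap<Reverse<(usize, usize)>>) -> Option<(usize, usize)> {
    heap.pop().map(|Reverse(node)| node)
}

fn main() {
    let mut heap = BinaryHeap::new();
    heap.push(Reverse((7, 1)));
    heap.push(Reverse((2, 3)));
    heap.push(Reverse((5, 0)));
    // smallest risk pops first, no `usize::MAX - risk` trick required
    assert_eq!(Some((2, 3)), pop_min(&mut heap));
    assert_eq!(Some((5, 0)), pop_min(&mut heap));
}
```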
The Dijkstra implementation looks as follows:
pub fn solve(grid: &[usize], w: usize, n: usize) -> usize {
let h = grid.len() / w;
let mut heap = BTreeSet::new();
let mut settled = vec![false; w * h * n * n];
let mut risks = vec![None; w * h * n * n];
heap.insert((0, 0)); // nodes are tuples (risk, idx)
risks[0] = Some(0); // keep track of best risks so far
settled[0] = true; // flag nodes which are settled
while let Some((risk, idx)) = heap.pop_first() {
if idx == grid.len() * n * n - 1 {
return risk; // target reached
}
settled[idx] = true; // no further improvement possible
let (x, y) = (idx % (w * n), idx / (w * n));
for (x_a, y_a) in [
(x + 1, y),
(x, y + 1),
(x.wrapping_sub(1), y),
(x, y.wrapping_sub(1)),
] {
let idx_a = x_a + y_a * w * n;
if x_a >= w * n || y_a >= h * n || settled[idx_a] {
continue; // out of bounds or settled
}
let risk_upd =
risk + ((grid[x_a % w + y_a % h * w] + x_a / w + y_a / h - 1) % 9) + 1;
if let Some(risk_cur) = risks[idx_a] {
if risk_upd >= risk_cur {
continue; // no improvement
}
if cfg!(feature = "dijkstra_panic_decrease") {
unreachable!("Decrease key can never happen");
}
heap.remove(&(risk_cur, idx_a)); // improved path to node seen previously
}
heap.insert((risk_upd, idx_a));
risks[idx_a] = Some(risk_upd);
}
}
panic!("No path found");
}
I use a BTreeSet
as heap. Decrease key is implemented as removing the old node and inserting the new one (but it is actually never used, see above).
The solution only compiles using the rust nightly channel because BTreeSet::pop_first
is unstable. The simple version runs on stable Rust.
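On stable Rust, pop_first can be emulated by copying the smallest element out of the set and removing it explicitly. A minimal sketch (my own workaround, slightly more costly than a native pop_first since it walks the tree twice):

```rust
use std::collections::BTreeSet;

/// emulate the unstable `BTreeSet::pop_first` on stable Rust: copy the
/// smallest element out of the ordered set, then remove it
fn pop_first(set: &mut BTreeSet<(usize, usize)>) -> Option<(usize, usize)> {
    let first = set.iter().next().copied()?;
    set.remove(&first);
    Some(first)
}

fn main() {
    let mut set = BTreeSet::new();
    set.insert((5, 0));
    set.insert((2, 3));
    assert_eq!(Some((2, 3)), pop_first(&mut set)); // smallest (risk, idx) first
    assert_eq!(Some((5, 0)), pop_first(&mut set));
    assert_eq!(None, pop_first(&mut set));
}
```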
Run the Dijkstra version with cargo run --release --features dijkstra
.
///
/// ```
/// # use mr_kaffee_2021_15::*;
/// assert_eq!((vec![0, 1, 2, 3], 2), parse("01\n23"));
/// ```
pub fn parse(content: &str) -> (Vec<usize>, usize) {
(
content
.chars()
.filter(|c| c.is_ascii_digit())
.map(|c| c as usize - '0' as usize)
.collect(),
content.lines().next().expect("No data").len(),
)
}
#[cfg(test)]
mod tests {
use super::*;
const TEST_CONTENT: &str = "1163751742
1381373672
2136511328
3694931569
7463417111
1319128137
1359912421
3125421639
1293138521
2311944581";
const TEST_GRID: &'static [usize] = &[
1, 1, 6, 3, 7, 5, 1, 7, 4, 2, 1, 3, 8, 1, 3, 7, 3, 6, 7, 2, 2, 1, 3, 6, 5, 1, 1, 3, 2, 8,
3, 6, 9, 4, 9, 3, 1, 5, 6, 9, 7, 4, 6, 3, 4, 1, 7, 1, 1, 1, 1, 3, 1, 9, 1, 2, 8, 1, 3, 7,
1, 3, 5, 9, 9, 1, 2, 4, 2, 1, 3, 1, 2, 5, 4, 2, 1, 6, 3, 9, 1, 2, 9, 3, 1, 3, 8, 5, 2, 1,
2, 3, 1, 1, 9, 4, 4, 5, 8, 1,
];
const TEST_WIDTH: usize = 10;
#[test]
fn test_parse() {
let (grid, width) = parse(TEST_CONTENT);
assert_eq!(TEST_GRID, grid);
assert_eq!(TEST_WIDTH, width);
}
#[test]
fn test_solution_1() {
assert_eq!(40, solution_1(&TEST_GRID, TEST_WIDTH))
}
#[test]
fn test_solution_2() {
assert_eq!(315, solution_2(&TEST_GRID, TEST_WIDTH))
}
}
A possible way to implement Dijkstra on a simple graph with Rust.
Remember how Dijkstra and A* really work and understand that Dijkstra is not always needed depending on the structure of the problem to solve.
Use features for conditional compilation and use unstable Rust features.
Unfortunately, I also learned that the Rust collections in std::collections
offer a rather limited interface. The function BTreeSet::pop_first
being unstable is sad.
Rust solution to AoC|2021|16.
A lot to read today. But then it is just following the instructions.
I implemented a structure Packets
to keep track of the current position and read numbers from the bits & bytes (I am particularly proud that I got the bit ordering right from the beginning):
pub struct Packets {
data: Vec<u8>,
pos: usize,
}
impl Packets {
pub const SUM: usize = 0;
pub const PRODUCT: usize = 1;
pub const MIN: usize = 2;
pub const MAX: usize = 3;
pub const VALUE: usize = 4;
pub const GREATER: usize = 5;
pub const LESS: usize = 6;
pub const EQUAL: usize = 7;
pub const N_VERSION: usize = 3;
pub const N_TYPE_ID: usize = 3;
pub const N_FLAG: usize = 1;
pub const N_VALUE_PART: usize = 4;
pub const N_LENGTH_TYPE_ID: usize = 1;
pub const N_LENGTH: [usize; 2] = [15, 11];
pub fn from(bytes: &[u8]) -> Self {
Packets {
data: Vec::from(bytes),
pos: 0,
}
}
pub fn read_number(&mut self, len: usize) -> u64 {
let mut v = 0;
for _ in 0..len {
v = v << 1 | ((self.data[self.pos >> 2] >> (!self.pos & 3)) & 1) as u64;
self.pos += 1;
}
v
}
pub fn skip(&mut self, len: usize) {
self.pos += len;
}
}
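As a sanity check of the bit ordering, here is a standalone copy of the same indexing arithmetic (my own sketch, not the solution code). The literal-value example D2FE28 from the puzzle starts with version 6 and type ID 4:

```rust
/// nibble-wise bit reader: `data` holds one hex digit (4 bits) per entry;
/// `pos >> 2` selects the nibble and `!pos & 3` the bit within it
fn read_number(data: &[u8], pos: &mut usize, len: usize) -> u64 {
    let mut v = 0;
    for _ in 0..len {
        v = v << 1 | ((data[*pos >> 2] >> (!*pos & 3)) & 1) as u64;
        *pos += 1;
    }
    v
}

fn main() {
    // "D2" = 1101 0010: the first three bits are 110 = 6 (version),
    // the next three bits are 100 = 4 (type ID of a literal value)
    let data = [0xD, 0x2];
    let mut pos = 0;
    assert_eq!(6, read_number(&data, &mut pos, 3));
    assert_eq!(4, read_number(&data, &mut pos, 3));
}
```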
The input is parsed into that structure with
///
/// ```
/// # use mr_kaffee_2021_16::*;
/// assert_eq!(Packets::from(&[0, 1, 10]), parse("01A"));
/// ```
pub fn parse(content: &str) -> Packets {
let data = content
.trim()
.chars()
.map(|c| match c {
'0'..='9' => c as u8 - '0' as u8,
'A'..='F' => c as u8 - 'A' as u8 + 10,
_ => panic!("Illegal character: {}", c),
})
.collect::<Vec<_>>();
Packets::from(&data)
}
Then, part 1 is solved with the function sum_versions
which calls itself recursively:
pub fn sum_versions(packets: &mut Packets) -> u64 {
let mut versions_sum = packets.read_number(Packets::N_VERSION);
let type_id = packets.read_number(Packets::N_TYPE_ID) as usize;
if type_id == Packets::VALUE {
loop {
let flag = packets.read_number(Packets::N_FLAG);
packets.skip(Packets::N_VALUE_PART); // skip value
if flag == 0 {
break; // last part's flag is 0
}
}
} else {
let length_type_id = packets.read_number(Packets::N_LENGTH_TYPE_ID) as usize;
let length = packets.read_number(Packets::N_LENGTH[length_type_id]) as usize;
if length_type_id == 1 {
for _ in 0..length {
versions_sum += sum_versions(packets);
}
} else {
let target = packets.pos + length;
while packets.pos < target {
versions_sum += sum_versions(packets);
}
}
}
versions_sum
}
For part 2, the function sum_versions
is modified to take into account the operators. The new function process_packet
reads:
pub fn process_packet(packets: &mut Packets) -> u64 {
packets.skip(Packets::N_VERSION); // skip version
let type_id = packets.read_number(Packets::N_TYPE_ID) as usize;
if type_id == Packets::VALUE {
let mut value = 0;
loop {
let flag = packets.read_number(Packets::N_FLAG);
value = value << 4 | packets.read_number(Packets::N_VALUE_PART);
if flag == 0 {
break; // last part's flag is 0
}
}
value
} else {
let length_type_id = packets.read_number(Packets::N_LENGTH_TYPE_ID) as usize;
let length = packets.read_number(Packets::N_LENGTH[length_type_id]) as usize;
let mut values = Vec::new();
if length_type_id == 1 {
for _ in 0..length {
values.push(process_packet(packets));
}
} else {
let target = packets.pos + length;
while packets.pos < target {
values.push(process_packet(packets));
}
}
match type_id {
Packets::SUM => values.iter().sum(),
Packets::PRODUCT => values.iter().product(),
Packets::MIN => *values.iter().min().expect("No min"),
Packets::MAX => *values.iter().max().expect("No max"),
Packets::GREATER => (values[0] > values[1]) as u64,
Packets::LESS => (values[0] < values[1]) as u64,
Packets::EQUAL => (values[0] == values[1]) as u64,
_ => panic!("Illegal type ID: {}", type_id),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
const CONTENT_1: &str = "8A004A801A8002F478";
const CONTENT_2: &str = "620080001611562C8802118E34";
const CONTENT_3: &str = "C0015000016115A2E0802F182340";
const CONTENT_4: &str = "A0016C880162017C3686B18A3D4780";
const DATA_0: &[u8] = &[0xD, 0x2, 0xF, 0xE, 0x2, 0x8];
const DATA_2: &[u8] = &[
6, 2, 0, 0, 8, 0, 0, 0, 1, 6, 1, 1, 5, 6, 2, 0xC, 8, 8, 0, 2, 1, 1, 8, 0xE, 3, 4,
];
const SUM_0: u64 = 6;
const SUM_1: u64 = 16;
const SUM_2: u64 = 12;
const SUM_3: u64 = 23;
const SUM_4: u64 = 31;
#[test]
fn test_parse() {
assert_eq!(Packets::from(DATA_2), parse(CONTENT_2));
}
#[test]
fn test_solution_1() {
assert_eq!(SUM_0, solution_1(&mut Packets::from(DATA_0)));
assert_eq!(SUM_1, solution_1(&mut parse(CONTENT_1)));
assert_eq!(SUM_2, solution_1(&mut Packets::from(DATA_2)));
assert_eq!(SUM_3, solution_1(&mut parse(CONTENT_3)));
assert_eq!(SUM_4, solution_1(&mut parse(CONTENT_4)));
}
#[test]
fn test_solution_2() {
assert_eq!(3, solution_2(&mut parse("C200B40A82")));
assert_eq!(7, solution_2(&mut parse("880086C3E88112")));
assert_eq!(1, solution_2(&mut parse("9C0141080250320F1802104A08")));
}
}
Rust solution to AoC|2021|17.
Today it is about projectile (or probe) motion and calculating initial conditions to reach a target. I guess it would be possible to calculate an analytical solution for both parts by solving quadratic equations and then reconstructing the integer solutions, but this is not the approach I took.
Note: My solution relies on the fact that the target lies to the right and below the starting point (the below part is actually important).
For part 1, I only consider the y
-coordinate. If the probe is sent off with some initial velocity vy0
, it will come back to the initial y
-coordinate with velocity -vy0-1
after some time. The next y
coordinate seen is thus vy0+1
units below the initial y
-coordinate. If the initial velocity is chosen such that this coordinate is just within the target region, this yields the trajectory with the highest possible maximum. See the comments in the code below for some more details.
pub fn solution_1(target: (isize, isize, isize, isize)) -> isize {
let (x_min, x_max, y_min, y_max) = target;
assert!(
x_min > 0 && x_max > x_min && y_max < 0 && y_min < y_max,
"Solution assumes 0 < x_min < x_max and y_min < y_max < 0"
);
// max is reached if x coordinate ends up in target
// for given initial velocity vy0
//
// the max is reached at step k = vy0
// the max value is vy0 * (vy0 + 1) / 2
//
// at step k = 2 * vy0 + 1 the initial value y = 0 is reached again
// with velocity vy = -vy0 - 1
//
// the next step reaches
// y = -vy0 - 1
//
// the highest max is reached, if y = -vy0 - 1 is just within the target
// zone, i.e., if y_min = -vy0 - 1 or vy0 = -y_min - 1
let vy0 = -y_min - 1;
vy0 * (vy0 + 1) / 2
}
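The reasoning can be cross-checked by brute force over the y coordinate alone. The following is my own standalone sketch (assuming, like the solution, that the target band lies strictly below the start):

```rust
/// simulate only the y coordinate: return the highest y reached if a probe
/// launched with vertical velocity `vy0` ever lands inside `y_min..=y_max`
fn max_height(vy0: i64, y_min: i64, y_max: i64) -> Option<i64> {
    let (mut y, mut vy, mut max) = (0, vy0, 0);
    while y >= y_min {
        if y <= y_max {
            return Some(max); // probe is inside the target band
        }
        y += vy;
        vy -= 1;
        max = max.max(y);
    }
    None // probe fell past the target band
}

fn main() {
    // example target y range -10..=-5: part 1 predicts vy0 = -y_min - 1 = 9
    assert_eq!(Some(45), max_height(9, -10, -5));
    // one faster and the probe drops from y = 0 straight to y = -11, missing
    assert_eq!(None, max_height(10, -10, -5));
}
```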
Of course, I also need to parse the input:
///
/// ```
/// # use mr_kaffee_2021_17::*;
/// assert_eq!((-1, 2, 0, 10), parse("target area: x=-1..2, y=0..10"));
/// ```
pub fn parse(content: &str) -> (isize, isize, isize, isize) {
let mut parts = content.trim().split(", ");
let lhs = parts.next().expect("No lhs");
let rhs = parts.next().expect("No rhs");
let x_range = lhs.split("=").skip(1).next().expect("No x range");
let y_range = rhs.split("=").skip(1).next().expect("No y range");
let mut x_parts = x_range.split("..");
let mut y_parts = y_range.split("..");
let x1 = x_parts
.next()
.expect("No x1")
.parse()
.expect("Could not parse x1");
let x2 = x_parts
.next()
.expect("No x2")
.parse()
.expect("Could not parse x2");
let y1 = y_parts
.next()
.expect("No y1")
.parse()
.expect("Could not parse y1");
let y2 = y_parts
.next()
.expect("No y2")
.parse()
.expect("Could not parse y2");
(x1, x2, y1, y2)
}
Part 2 is about figuring out which limits to check for the initial conditions. See the comments in the code for what I came up with.
pub fn solution_2(target: (isize, isize, isize, isize)) -> usize {
let (x_min, x_max, y_min, y_max) = target;
assert!(
x_min > 0 && x_max > x_min && y_max < 0 && y_min < y_max,
"Solution assumes 0 < x_min < x_max and y_min < y_max < 0"
);
// vx[0] = vx0
// run for vx0 steps
// end at vx0 + (vx0 - 1) + ... = vx0 * (vx0 + 1) / 2
// find the smallest vx0 whose total reach vx0 * (vx0 + 1) / 2 is at least x_min
let mut vx_min = 0;
while vx_min * (vx_min + 1) / 2 < x_min {
vx_min += 1;
}
// if vx0 is above this value, the first step overshoots
let vx_max = x_max;
// if below, first step ends below target
let vy_min = y_min;
// max from part 1
let vy_max = -y_min - 1;
let mut count = 0;
for vx in vx_min..=vx_max {
for vy in vy_min..=vy_max {
if is_target_reached(target, vx as usize, vy) {
count += 1;
}
}
}
count
}
For each candidate initial condition, I simulate the trajectory to check whether it reaches the target:
pub fn is_target_reached(target: (isize, isize, isize, isize), vx: usize, vy: isize) -> bool {
let mut x = 0;
let mut y = 0;
let mut vx = vx as isize;
let mut vy = vy;
while (vy > 0 || y >= target.2) && x <= target.1 { // while target can be reached
x = x + vx;
y = y + vy;
vx = if vx > 0 { vx - 1 } else { vx };
vy = vy - 1;
if x >= target.0 && x <= target.1 && y >= target.2 && y <= target.3 {
return true;
}
}
false
}
#[cfg(test)]
mod tests {
use super::*;
const CONTENT: &str = "target area: x=20..30, y=-10..-5";
const TEST_TARGET: (isize, isize, isize, isize) = (20, 30, -10, -5);
#[test]
fn test_parse() {
assert_eq!(TEST_TARGET, parse(CONTENT));
}
#[test]
fn test_solution_1() {
assert_eq!(45, solution_1(TEST_TARGET));
}
#[test]
fn test_solution_2() {
assert_eq!(112, solution_2(TEST_TARGET));
}
#[test]
fn test_is_target_reached() {
let vs = &[
(23, -10),
(25, -9),
(27, -5),
(29, -6),
(22, -6),
(21, -7),
(9, 0),
(27, -7),
(24, -5),
(25, -7),
(26, -6),
(25, -5),
(6, 8),
(11, -2),
(20, -5),
(29, -10),
(6, 3),
(28, -7),
(8, 0),
(30, -6),
(29, -8),
(20, -10),
(6, 7),
(6, 4),
(6, 1),
(14, -4),
(21, -6),
(26, -10),
(7, -1),
(7, 7),
(8, -1),
(21, -9),
(6, 2),
(20, -7),
(30, -10),
(14, -3),
(20, -8),
(13, -2),
(7, 3),
(28, -8),
(29, -9),
(15, -3),
(22, -5),
(26, -8),
(25, -8),
(25, -6),
(15, -4),
(9, -2),
(15, -2),
(12, -2),
(28, -9),
(12, -3),
(24, -6),
(23, -7),
(25, -10),
(7, 8),
(11, -3),
(26, -7),
(7, 1),
(23, -9),
(6, 0),
(22, -10),
(27, -6),
(8, 1),
(22, -8),
(13, -4),
(7, 6),
(28, -6),
(11, -4),
(12, -4),
(26, -9),
(7, 4),
(24, -10),
(23, -8),
(30, -8),
(7, 0),
(9, -1),
(10, -1),
(26, -5),
(22, -9),
(6, 5),
(7, 5),
(23, -6),
(28, -10),
(10, -2),
(11, -1),
(20, -9),
(14, -2),
(29, -7),
(13, -3),
(23, -5),
(24, -8),
(27, -9),
(30, -7),
(28, -5),
(21, -10),
(7, 9),
(6, 6),
(21, -5),
(27, -10),
(7, 2),
(30, -9),
(21, -8),
(22, -7),
(24, -9),
(20, -6),
(6, 9),
(29, -5),
(8, -2),
(27, -8),
(30, -5),
(24, -7),
];
for (vx0, vy0) in vs {
assert!(
is_target_reached(TEST_TARGET, *vx0, *vy0),
"Failed at {}, {}",
vx0,
vy0
);
}
}
}
Rust solution to AoC|2021|18.
Parse the snail numbers into vectors of tokens.
Implement functions that apply explode and split steps until no further reduction is possible
Calculate the sum over all snail numbers
Calculate magnitude
/// holds tokens of a snail number
#[derive(PartialEq, Eq, Debug, Clone, Copy)]
pub enum Token {
Op,
Cl,
Val(usize),
}
/// # Examples
/// ```
/// # use mr_kaffee_2021_18::*;
/// assert_eq!(
/// vec![
/// vec![
/// Token::Op,
/// Token::Op,
/// Token::Val(0),
/// Token::Val(1),
/// Token::Cl,
/// Token::Val(1),
/// Token::Cl
/// ]
/// ],
/// parse("[[0,1],1]"));
/// ```
#[cfg(not(feature = "recursive"))]
pub fn parse(content: &str) -> Vec<Vec<Token>> {
content
.lines()
.map(|line| {
line.chars()
.filter(|c| !c.is_whitespace() && *c != ',')
.map(|c| match c {
'[' => Token::Op,
']' => Token::Cl,
'0'..='9' => Token::Val(c as usize - '0' as usize),
_ => panic!("Unexpected character: {}", c),
})
.collect::<Vec<_>>()
})
.collect()
}
/// perform single explode
///
/// returns true if any explode performed, otherwise false
pub fn explode(snail: &mut Vec<Token>) -> bool {
let mut state = (None, 0);
for k in 0..snail.len() {
state = match snail[k] {
Token::Op => (None, state.1 + 1),
Token::Cl => (None, state.1 - 1),
Token::Val(val) if state.0.is_some() && state.1 == 5 => {
if let Some((j, x)) = (0..k - 2).rev().find_map(|j| match snail[j] {
Token::Val(v) => Some((j, Token::Val(v + state.0.unwrap()))),
_ => None,
}) {
snail[j] = x;
}
if let Some((j, x)) = (k + 2..snail.len()).find_map(|j| match snail[j] {
Token::Val(v) => Some((j, Token::Val(v + val))),
_ => None,
}) {
snail[j] = x;
}
// remove elements k-1, k, k+1
snail.drain(k - 1..k + 2);
snail[k - 2] = Token::Val(0);
return true;
}
Token::Val(val) => (Some(val), state.1),
}
}
false
}
/// perform single split
///
/// returns true if any split performed, otherwise false
pub fn split(snail: &mut Vec<Token>) -> bool {
if let Some((k, v)) = snail.iter().enumerate().find_map(|(k, &e)| match e {
Token::Val(v) if v > 9 => Some((k, v)),
_ => None,
}) {
let a = v >> 1;
let b = v - a;
snail.splice(
k..k + 1,
[Token::Op, Token::Val(a), Token::Val(b), Token::Cl],
);
return true;
}
false
}
/// reduce snail number
pub fn reduce(snail: &mut Vec<Token>) {
while explode(snail) || split(snail) {}
}
/// calculate sum over snail numbers at given indices
pub fn sum(snail: &[Vec<Token>], mut indices: impl Iterator<Item = usize>) -> Vec<Token> {
let mut sum = snail[indices.next().expect("Empty indices")].to_owned();
for k in indices {
sum.insert(0, Token::Op);
sum.extend(&snail[k]);
sum.push(Token::Cl);
reduce(&mut sum);
}
sum
}
/// recursively calculate magnitude for the snail number at given position
///
/// returns magnitude and position
pub fn magnitude(snail: &[Token], k: usize) -> (usize, usize) {
let (a, j) = match snail[k + 1] {
Token::Op => magnitude(&snail, k + 1),
Token::Val(v) => (v, k + 2),
Token::Cl => panic!("Unexpected close tag at {}", k + 1),
};
let (b, j) = match snail[j] {
Token::Op => magnitude(&snail, j),
Token::Val(v) => (v, j + 1),
Token::Cl => panic!("Unexpected close tag at {}", j),
};
(3 * a + 2 * b, j + 1)
}
Part 2 is a simple extension:
#[cfg(not(feature = "recursive"))]
pub fn solution_2(snails: &[Vec<Token>]) -> usize {
let mut max = 0;
for k1 in 0..snails.len() {
for k2 in 0..snails.len() {
if k1 == k2 {
continue;
}
let sum = sum(snails, [k1, k2].into_iter());
let (m, _) = magnitude(&sum, 0);
if m > max {
max = m;
}
}
}
max
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_explode() {
let mut snails = parse("[[[[[9,8],1],2],3],4]\n[[[[0,9],2],3],4]");
assert!(explode(&mut snails[0]), "Did not explode");
assert_eq!(snails[0], snails[1]);
let mut snails =
parse("[[[[[4,3],4],4],[7,[[8,4],9]]],[1,1]]\n[[[[0,7],4],[7,[[8,4],9]]],[1,1]]");
assert!(explode(&mut snails[0]));
assert_eq!(snails[0], snails[1]);
}
#[test]
fn test_split() {
let mut snails = vec![
vec![
Token::Op,
Token::Op,
Token::Val(1),
Token::Val(15),
Token::Cl,
Token::Val(11),
Token::Cl,
],
vec![
Token::Op,
Token::Op,
Token::Val(1),
Token::Op,
Token::Val(7),
Token::Val(8),
Token::Cl,
Token::Cl,
Token::Val(11),
Token::Cl,
],
];
assert!(split(&mut snails[0]), "Did not split");
assert_eq!(snails[0], snails[1]);
}
#[test]
fn test_reduce() {
let mut snails =
parse("[[[[[4,3],4],4],[7,[[8,4],9]]],[1,1]]\n[[[[0,7],4],[[7,8],[6,0]]],[8,1]]");
reduce(&mut snails[0]);
assert_eq!(snails[0], snails[1]);
}
#[test]
fn test_sum() {
let snails = parse(
"[[[0,[5,8]],[[1,7],[9,6]]],[[4,[1,2]],[[1,4],2]]]
[[[5,[2,8]],4],[5,[[9,9],0]]]
[6,[[[6,2],[5,6]],[[7,6],[4,7]]]]
[[[6,[0,7]],[0,9]],[4,[9,[9,0]]]]
[[[7,[6,4]],[3,[1,3]]],[[[5,5],1],9]]
[[6,[[7,3],[3,2]]],[[[3,8],[5,7]],4]]
[[[[5,4],[7,7]],8],[[8,3],8]]
[[9,3],[[9,9],[6,[4,9]]]]
[[2,[[7,7],7]],[[5,8],[[9,3],[0,2]]]]
[[[[5,2],5],[8,[3,7]]],[[5,[7,5]],[4,4]]]",
);
let sum = sum(&snails, 0..snails.len());
assert_eq!(
parse("[[[[6,6],[7,6]],[[7,7],[7,0]]],[[[7,7],[7,7]],[[7,8],[9,9]]]]")[0],
sum
);
}
#[test]
fn test_magnitude() {
let snails = parse("[[[[6,6],[7,6]],[[7,7],[7,0]]],[[[7,7],[7,7]],[[7,8],[9,9]]]]");
let (m, j) = magnitude(&snails[0], 0);
assert_eq!(4140, m);
assert_eq!(snails[0].len(), j);
}
}
The structure calls for a recursive data structure. I might give it a try later on.
For now, the puzzle is solved. Not a solution I am particularly proud of. Specifically, I don’t like that it increases the overall runtime for all my solutions for this year so far by 40%!
I now have a solution based on recursive data structures. It is three times slower… So I made some small optimizations in the Vec-based solution instead (use drain
and splice
instead of removing/inserting values one by one - see above).
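The batched Vec edits can be illustrated in isolation. A minimal sketch with plain integers, where -1 and -2 stand in for the Op and Cl tokens (my own simplification):

```rust
/// replace the single value at index `k` by the four tokens of a split pair,
/// shifting the tail only once (what `splice` buys over inserting one by one)
fn split_at(tokens: &mut Vec<i64>, k: usize, a: i64, b: i64) {
    tokens.splice(k..k + 1, [-1, a, b, -2]); // -1 = Op, -2 = Cl
}

fn main() {
    let mut tokens = vec![1, 9, 5];
    split_at(&mut tokens, 1, 4, 5); // 9 splits into the pair [4,5]
    assert_eq!(vec![1, -1, 4, 5, -2, 5], tokens);

    // `drain` removes a whole range in one pass (used when a pair explodes)
    tokens.drain(1..5);
    assert_eq!(vec![1, 5], tokens);
}
```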
The recursive solution is also available as a variant and you can run the recursive solution with cargo run --release --features recursive
.
#[derive(Clone, PartialEq, Eq)]
pub enum Snail {
Val(usize),
Pair(Box<Snail>, Box<Snail>),
}
impl fmt::Display for Snail {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Self::Val(v) => write!(f, "{}", v),
Self::Pair(lhs, rhs) => write!(f, "[{},{}]", lhs, rhs),
}
}
}
impl fmt::Debug for Snail {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Self::Val(v) => write!(f, "{}", v),
Self::Pair(lhs, rhs) => write!(f, "[{:?},{:?}]", lhs, rhs),
}
}
}
impl Snail {
/// parse a string representation of a snail number
///
/// # Examples
/// ```
/// # use mr_kaffee_2021_18::recursive::*;
/// let snail = Snail::parse("[[1,2]\n, 3]");
/// assert_eq!("[[1,2],3]", format!("{}", snail));
/// ```
pub fn parse(data: &str) -> Self {
Self::parse_recursive(&mut data.chars().filter(|c| !c.is_whitespace()).peekable())
}
fn parse_recursive(data: &mut Peekable<impl Iterator<Item = char>>) -> Self {
let token = data.next();
match token {
Some('[') => {
let lhs = Self::parse_recursive(data);
assert_eq!(Some(','), data.next());
let rhs = Self::parse_recursive(data);
assert_eq!(Some(']'), data.next());
Self::Pair(Box::new(lhs), Box::new(rhs))
}
Some(c) if c >= '0' && c <= '9' => {
let mut v = c as usize - '0' as usize;
while let Some(d) = data
.peek()
.filter(|c| c.is_ascii_digit())
.map(|c| *c as usize - '0' as usize)
{
data.next();
v = 10 * v + d;
}
Self::Val(v)
}
_ => panic!("Unexpected token: {:?}", token),
}
}
fn increment_first(&mut self, inc: usize) {
match self {
Self::Pair(lhs, _) => {
if let Self::Val(val) = *lhs.as_ref() {
let _ = std::mem::replace(lhs, Box::new(Self::Val(val + inc)));
} else {
lhs.increment_first(inc);
}
}
_ => {}
}
}
fn increment_last(&mut self, inc: usize) {
match self {
Self::Pair(_, rhs) => {
if let Self::Val(val) = *rhs.as_ref() {
let _ = std::mem::replace(rhs, Box::new(Self::Val(val + inc)));
} else {
rhs.increment_last(inc);
}
}
_ => {}
}
}
fn explode(&mut self, level: usize) -> (Option<Self>, Option<(usize, usize)>) {
match self {
Self::Val(_) => (None, None),
Self::Pair(lhs, rhs) => {
if level == 4 {
if let (Self::Val(a), Self::Val(b)) = (lhs.as_ref(), rhs.as_ref()) {
(Some(Self::Val(0)), Some((*a, *b)))
} else {
(None, None)
}
} else if level < 4 {
if let (replace, Some((a, b))) = lhs.explode(level + 1) {
if let Self::Val(val) = *rhs.as_ref() {
// increment rhs with b
let _ = std::mem::replace(rhs, Box::new(Self::Val(val + b)));
} else {
// increment first number in rhs with b
rhs.increment_first(b);
}
if let Some(replace) = replace {
// replace exploded value
let _ = std::mem::replace(lhs, Box::new(replace));
}
(None, Some((a, 0)))
} else if let (replace, Some((a, b))) = rhs.explode(level + 1) {
if let Self::Val(val) = *lhs.as_ref() {
// increment lhs with a
let _ = std::mem::replace(lhs, Box::new(Self::Val(val + a)));
} else {
// increment last number in lhs with a
lhs.increment_last(a);
}
if let Some(replace) = replace {
// replace exploded value
let _ = std::mem::replace(rhs, Box::new(replace));
}
(None, Some((0, b)))
} else {
(None, None)
}
} else {
(None, None)
}
}
}
}
fn split(&mut self) -> bool {
if let Self::Pair(lhs, rhs) = self {
if let Self::Val(val) = lhs.as_ref() {
if val > &9 {
let a = Box::new(Self::Val(val >> 1));
let b = Box::new(Self::Val((val >> 1) + (val & 1)));
let _ = std::mem::replace(lhs, Box::new(Self::Pair(a, b)));
return true;
}
}
if lhs.split() {
return true;
}
if let Self::Val(val) = rhs.as_ref() {
if val > &9 {
let a = Box::new(Self::Val(val >> 1));
let b = Box::new(Self::Val((val >> 1) + (val & 1)));
let _ = std::mem::replace(rhs, Box::new(Self::Pair(a, b)));
return true;
}
}
return rhs.split();
}
false
}
pub fn reduce(&mut self) {
loop {
let (_, r) = self.explode(0);
if r.is_none() && !self.split() {
break;
}
}
}
pub fn add(&self, other: &Snail) -> Self {
let mut sum = Self::Pair(Box::new(self.clone()), Box::new(other.clone()));
sum.reduce();
sum
}
pub fn magnitude(&self) -> usize {
match self {
Self::Val(v) => *v,
Self::Pair(a, b) => 3 * a.magnitude() + 2 * b.magnitude(),
}
}
}
... a lot on recursive data structures and the Box
type in Rust, but also how to insert and remove elements from a Vec
as efficiently as possible.
Rust solution to AoC|2021|19.
The solution for today’s puzzle takes ~60ms to complete in my current implementation (optimized from 30 seconds in the first version).
The key to these optimizations is to realize that distances between beacons are invariant under coordinate transformations. Hence, it is possible to determine scanner range overlaps based on distances alone and only identify the correct transformation afterwards. I have to admit that I did not come up with this idea myself but had to look at other solutions. I took the idea from Moritz Gronbach.
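The invariance is easy to verify on a toy example (my own sketch; the solution uses its own Coord helpers): squared distances between beacons do not change under axis rotations and translations.

```rust
/// squared Euclidean distance between two points
fn dist2(a: (i64, i64, i64), b: (i64, i64, i64)) -> i64 {
    let (dx, dy, dz) = (a.0 - b.0, a.1 - b.1, a.2 - b.2);
    dx * dx + dy * dy + dz * dz
}

fn main() {
    let (p, q) = ((1, 2, 3), (4, 6, 3));
    // rotate 90 degrees around z ((x, y, z) -> (-y, x, z)), then translate
    let rot = |(x, y, z): (i64, i64, i64)| (-y + 10, x, z - 7);
    // the pairwise distance survives the transformation unchanged
    assert_eq!(dist2(p, q), dist2(rot(p), rot(q)));
}
```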
When reading the input, I create a distance map (dists
) which — for every beacon in a scanner’s set — creates a map with distances as keys and lists of beacons at that distance as values.
/// parse scanner
fn from_str(s: &str) -> Result<Self, Self::Err> {
let mut lines = s.lines().map(|l| l.trim()).filter(|l| l.len() > 0);
let trimchars: &[char] = &[' ', '-'];
let head = lines.next().ok_or("Empty content")?.trim_matches(trimchars);
let id = head
.strip_prefix("scanner ")
.ok_or_else(|| format!("Illegal header: {}", head))?
.parse()?;
let pos = Coord(0, 0, 0);
let mut beacons = Vec::new();
for line in lines {
beacons.push(line.parse::<Coord>()?);
}
let mut dists = Vec::with_capacity(beacons.len());
for k1 in 0..beacons.len() {
let mut map = HashMap::new();
for k2 in 0..beacons.len() {
let d = beacons[k2].sub(&beacons[k1]).abs();
(*map.entry(d).or_insert(Vec::new())).push(k2);
}
dists.push(map);
}
Ok(Scanner {
id,
pos,
beacons,
dists,
})
}
This is used in a check_overlap
function, which first finds beacons in two scanners' (self
and other
) ranges that have at least 12 distances to other beacons in common and stores a list of pairs
of beacons that are potentially the same (a bit of extra thinking is required for the case that more than one beacon is at the same distance).
Using this list of pairs
, I then search for a matching transformation / offset that maps at least 12 beacons in the two scanners' ranges onto each other.
/// check whether ``self`` overlaps with ``other``
///
/// find the transformation ``t`` and the offset ``o`` such that the coordinate system
/// of ``other`` is transformed into the coordinate system of ``self`` as ``t(c) + o``,
/// i.e., after calling ``other.transform(t, &o)``, the same beacons have the same
/// coordinates in both scanners.
pub fn check_overlap(&self, other: &Self) -> Option<(fn(Coord) -> Coord, Coord)> {
for i1 in 0..=self.len() - Self::THRESHOLD {
for j1 in 0..=other.len() - Self::THRESHOLD {
let mut cnt = 0;
let mut pairs = Vec::new();
for (d, is) in &self.dists[i1] {
if let Some(js) = other.dists[j1].get(d) {
cnt += cmp::min(is.len(), js.len());
for i2 in is {
for j2 in js {
pairs.push((*i2, *j2));
}
}
}
}
if cnt < Self::THRESHOLD {
continue;
}
// beacon i1 from self and beacon j1 from other have at least Self::THRESHOLD
// distances to other beacons in common
for t in TRAFOS {
for k0 in 0..=pairs.len() - Self::THRESHOLD {
let o = self.beacons[pairs[k0].0].sub(&t(other.beacons[pairs[k0].1]));
let cnt = pairs
.iter()
.filter(|(i, j)| self.beacons[*i] == t(other.beacons[*j]).add(&o))
.count();
if cnt >= Self::THRESHOLD {
if cfg!(feature = "sanity-check") {
self.sanity_check(other, t, &o);
}
return Some((t, o));
}
}
}
}
}
None
}
The final piece of the solution is to walk through the scanners and check for overlaps until all scanners are settled. I push settled scanners to a stack, then repeatedly pop a scanner from the stack and look for scanners whose range overlaps with it, until everything is settled. While settling the scanners, the largest distance between any two scanners is tracked and a list of unique beacons is populated.
pub fn solution_1_2(scanners: &[Scanner]) -> (usize, usize) {
// use my own mutable copy of the scanners
let mut scanners = scanners.to_owned();
// state
let mut stack = Vec::with_capacity(scanners.len());
let mut settled = vec![false; scanners.len()];
let mut transformed = Vec::with_capacity(scanners.len());
let mut beacons: HashSet<Coord> = HashSet::new();
// init (start with scanner 0)
stack.push(0);
settled[0] = true;
transformed.push(0);
beacons.extend(&scanners[0].beacons);
let mut max_dist = 0;
while let Some(k1) = stack.pop() {
for k2 in 0..scanners.len() {
if settled[k2] {
continue;
}
if let Some((t, o)) = scanners[k1].check_overlap(&scanners[k2]) {
// update scanner to settled coordinates
scanners[k2].transform(t, &o);
// update max distance
for k_t in &transformed {
let dist = scanners[k2].pos.sub(&scanners[*k_t].pos).abs();
if dist > max_dist {
max_dist = dist;
}
}
// push k2 to stack, set settled, add to list of transformed and extend unique beacons
stack.push(k2);
settled[k2] = true;
transformed.push(k2);
beacons.extend(&scanners[k2].beacons);
}
}
}
// return number of beacons and largest distance between any two scanners
(beacons.len(), max_dist)
}
Some details of the implementation include:
A Coord
structure (including code to parse/print coordinates)
/// 3D coordinate
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Clone, Copy)]
pub struct Coord(isize, isize, isize);
impl Coord {
pub fn add(mut self, rhs: &Self) -> Self {
self.0 += rhs.0;
self.1 += rhs.1;
self.2 += rhs.2;
self
}
pub fn sub(mut self, rhs: &Self) -> Self {
self.0 -= rhs.0;
self.1 -= rhs.1;
self.2 -= rhs.2;
self
}
pub fn abs(&self) -> usize {
(self.0.abs() + self.1.abs() + self.2.abs()) as usize
}
}
impl FromStr for Coord {
type Err = Box<dyn std::error::Error>;
/// parse 3D coordinate
fn from_str(s: &str) -> Result<Self, Self::Err> {
let mut vals = s.split(',').map(|v| v.parse());
let x = vals
.next()
.ok_or_else(|| format!("Illegal coordinate: {}; no x value", s))??;
let y = vals
.next()
.ok_or_else(|| format!("Illegal coordinate: {}; no y value", s))??;
let z = vals
.next()
.ok_or_else(|| format!("Illegal coordinate: {}; no z value", s))??;
Ok(Coord(x, y, z))
}
}
impl std::fmt::Display for Coord {
/// print 3D coordinate
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{},{},{}", self.0, self.1, self.2)
}
}
A list of valid transformations
/// transformation functions for all 24 possible transformations
pub const TRAFOS: [fn(Coord) -> Coord; 24] = [
|Coord(x, y, z)| Coord(x, y, z),
|Coord(x, y, z)| Coord(x, -y, -z),
|Coord(x, y, z)| Coord(x, z, -y),
|Coord(x, y, z)| Coord(x, -z, y),
|Coord(x, y, z)| Coord(-x, y, -z),
|Coord(x, y, z)| Coord(-x, -y, z),
|Coord(x, y, z)| Coord(-x, z, y),
|Coord(x, y, z)| Coord(-x, -z, -y),
|Coord(x, y, z)| Coord(y, x, -z),
|Coord(x, y, z)| Coord(y, -x, z),
|Coord(x, y, z)| Coord(y, z, x),
|Coord(x, y, z)| Coord(y, -z, -x),
|Coord(x, y, z)| Coord(-y, x, z),
|Coord(x, y, z)| Coord(-y, -x, -z),
|Coord(x, y, z)| Coord(-y, z, -x),
|Coord(x, y, z)| Coord(-y, -z, x),
|Coord(x, y, z)| Coord(z, x, y),
|Coord(x, y, z)| Coord(z, -x, -y),
|Coord(x, y, z)| Coord(z, y, -x),
|Coord(x, y, z)| Coord(z, -y, x),
|Coord(x, y, z)| Coord(-z, x, -y),
|Coord(x, y, z)| Coord(-z, -x, y),
|Coord(x, y, z)| Coord(-z, y, x),
|Coord(x, y, z)| Coord(-z, -y, -x),
];
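That there are exactly 24 such transformations can be double-checked by brute force: of the 48 signed axis permutations, only those with determinant +1 are proper rotations. A small sketch:

```rust
// Sketch: count the signed axis permutations with determinant +1.
fn proper_rotation_count() -> usize {
    let perms = [[0, 1, 2], [0, 2, 1], [1, 0, 2], [1, 2, 0], [2, 0, 1], [2, 1, 0]];
    let mut count = 0;
    for perm in perms {
        // parity of the permutation: +1 for cyclic (even), -1 for swaps (odd)
        let parity = match perm {
            [0, 1, 2] | [1, 2, 0] | [2, 0, 1] => 1,
            _ => -1,
        };
        for sx in [-1, 1] {
            for sy in [-1, 1] {
                for sz in [-1, 1] {
                    // determinant of the signed permutation matrix
                    if parity * sx * sy * sz == 1 {
                        count += 1;
                    }
                }
            }
        }
    }
    count
}

fn main() {
    assert_eq!(proper_rotation_count(), 24);
}
```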
A Scanner
structure (including code to parse/print scanners)
#[derive(Debug, Clone)]
pub struct Scanner {
id: usize,
pos: Coord,
beacons: Vec<Coord>,
dists: Vec<HashMap<usize, Vec<usize>>>,
}
impl Scanner {
const THRESHOLD: usize = 12;
pub fn len(&self) -> usize {
self.beacons.len()
}
pub fn transform(&mut self, t: fn(Coord) -> Coord, o: &Coord) {
self.pos = t(self.pos).add(o);
for k in 0..self.len() {
self.beacons[k] = t(self.beacons[k]).add(o);
}
}
/// sanity check for overlapping scanner regions
///
/// for all beacons seen by self / other, check whether they are also seen by other / self
/// if and only if they are in range of other / self
///
/// # Panics
/// if sanity check fails
pub fn sanity_check(&self, other: &Self, t: fn(Coord) -> Coord, o: &Coord) {
// create a copy of other in self's coordinates
let mut other = other.to_owned();
other.transform(t, o);
// loop over beacons seen by self
for b1 in &self.beacons {
let d = other.pos.sub(b1);
let in_range = RANGE.contains(&d.0) && RANGE.contains(&d.1) && RANGE.contains(&d.2);
let contained = other.beacons.contains(b1);
if in_range != contained {
panic!(
"{} is in range of 2nd scanner {} at {}, but not contained in set or vice versa",
b1, other.id, other.pos
);
}
}
// loop over beacons seen by other
for b2 in &other.beacons {
let d = self.pos.sub(b2);
let in_range = RANGE.contains(&d.0) && RANGE.contains(&d.1) && RANGE.contains(&d.2);
let contained = self.beacons.contains(b2);
if in_range != contained {
panic!(
"{} is in range of 1st scanner {} at {}, but not contained in set or vice versa",
b2, self.id, self.pos
);
}
}
}
/// check whether ``self`` overlaps with ``other``
///
/// find the transformation ``t`` and the offset ``o`` such that the coordinate system
/// of ``other`` is transformed into the coordinate system of ``self`` as ``t(c) + o``,
/// i.e., after calling ``other.transform(t, &o)``, the same beacons have the same
/// coordinates in both scanners.
pub fn check_overlap(&self, other: &Self) -> Option<(fn(Coord) -> Coord, Coord)> {
for i1 in 0..=self.len() - Self::THRESHOLD {
for j1 in 0..=other.len() - Self::THRESHOLD {
let mut cnt = 0;
let mut pairs = Vec::new();
for (d, is) in &self.dists[i1] {
if let Some(js) = other.dists[j1].get(d) {
cnt += cmp::min(is.len(), js.len());
for i2 in is {
for j2 in js {
pairs.push((*i2, *j2));
}
}
}
}
if cnt < Self::THRESHOLD {
continue;
}
// beacon i1 from self and beacon j1 from other have at least Self::THRESHOLD
// distances to other beacons in common
for t in TRAFOS {
for k0 in 0..=pairs.len() - Self::THRESHOLD {
let o = self.beacons[pairs[k0].0].sub(&t(other.beacons[pairs[k0].1]));
let cnt = pairs
.iter()
.filter(|(i, j)| self.beacons[*i] == t(other.beacons[*j]).add(&o))
.count();
if cnt >= Self::THRESHOLD {
if cfg!(feature = "sanity-check") {
self.sanity_check(other, t, &o);
}
return Some((t, o));
}
}
}
}
}
None
}
}
impl FromStr for Scanner {
type Err = Box<dyn std::error::Error>;
/// parse scanner
fn from_str(s: &str) -> Result<Self, Self::Err> {
let mut lines = s.lines().map(|l| l.trim()).filter(|l| l.len() > 0);
let trimchars: &[char] = &[' ', '-'];
let head = lines.next().ok_or("Empty content")?.trim_matches(trimchars);
let id = head
.strip_prefix("scanner ")
.ok_or_else(|| format!("Illegal header: {}", head))?
.parse()?;
let pos = Coord(0, 0, 0);
let mut beacons = Vec::new();
for line in lines {
beacons.push(line.parse::<Coord>()?);
}
let mut dists = Vec::with_capacity(beacons.len());
for k1 in 0..beacons.len() {
let mut map = HashMap::new();
for k2 in 0..beacons.len() {
let d = beacons[k2].sub(&beacons[k1]).abs();
(*map.entry(d).or_insert(Vec::new())).push(k2);
}
dists.push(map);
}
Ok(Scanner {
id,
pos,
beacons,
dists,
})
}
}
impl std::fmt::Display for Scanner {
/// print scanner (without offset)
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
writeln!(f, "--- scanner {} ---", self.id)?;
for k in 0..self.len() {
self.beacons[k].sub(&self.pos).fmt(f)?;
writeln!(f)?;
}
Ok(())
}
}
The Scanner
implementation also includes a sanity check for overlapping regions (the sanity_check
function shown in the complete implementation above).
By default, the sanity check is switched off. It is enabled with the feature sanity-check
, i.e., if the puzzle is run with cargo run --release --features sanity-check
. For my puzzle input and the examples, the sanity check passes.
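For the feature gate to work, the feature has to be declared in the crate's manifest; a minimal sketch of what the Cargo.toml entry might look like (the actual manifest may of course contain more):

```toml
[features]
# opt-in sanity check, disabled by default;
# enable with: cargo run --release --features sanity-check
sanity-check = []
```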
#[cfg(test)]
mod tests {
use super::*;
const CONTENT: &str = include_str!("../test.txt");
#[test]
fn test_parse() {
let scanners = parse(CONTENT);
assert_eq!(5, scanners.len());
assert!(scanners[2].beacons.contains(&Coord(-644, 584, -595)));
assert!(scanners[4].beacons.contains(&Coord(30, -46, -14)));
}
#[test]
fn test_parse_format() {
let scanners = parse(CONTENT);
let text = scanners
.iter()
.map(|s| format!("{}\n", s))
.collect::<String>();
assert_eq!(CONTENT.trim(), text.trim());
}
#[test]
fn test_check_overlap() {
let scanners = parse(CONTENT);
let r = scanners[0].check_overlap(&scanners[1]);
assert!(r.is_some(), "No transformation found");
let (t, d) = r.unwrap();
assert_eq!(Coord(68, -1246, -43), d);
assert_eq!(Coord(-618, -824, -621), t(Coord(686, 422, 578)).add(&d));
}
#[test]
fn test_solution_1_2() {
let scanners = parse(CONTENT);
assert_eq!((79, 3621), solution_1_2(&scanners));
}
}
Rust solution to AoC|2021|20.
There is essentially one trap in today's puzzle, hidden in the instruction … being careful to account for the infinite size of the images: while in the test data the algorithm starts with .
, my (and I guess everyone else's) puzzle input's algorithm starts with #
. Hence, after applying the algorithm once, all pixels extending forever in any direction are set. To handle this, I store the negative of an image in that case. All images produced by an odd number of update steps will actually be negative images.
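The infinite background can be tracked with just the first and last entry of the algorithm: an all-dark 3x3 neighborhood yields lookup index 0, an all-lit one yields index 511. A minimal sketch (assuming algo[0] == '#' and algo[511] == '.', as in my input):

```rust
// Sketch: how the infinite background pixel evolves. All nine neighbors of a
// background pixel are background, so its lookup index is 0 (dark) or 511 (lit).
fn background_after(steps: usize, algo_first: bool, algo_last: bool) -> bool {
    let mut bg = false; // the background starts dark
    for _ in 0..steps {
        bg = if bg { algo_last } else { algo_first };
    }
    bg
}

fn main() {
    // with algo[0] == '#' and algo[511] == '.', the background alternates:
    // lit after odd steps (negative images), dark after even steps
    assert!(background_after(1, true, false));
    assert!(!background_after(2, true, false));
    // with algo[0] == '.' (as in the example input), it stays dark forever
    assert!(!background_after(50, false, true));
}
```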
I store images in an Image
structure:
/// structure storing image data
///
/// * ``data`` is a list of all set pixels
/// * ``bbox = (x_min, x_max, y_min, y_max)`` is the bounding box of set pixels
/// with exclusive upper bounds
/// * ``neg`` is a flag indicating whether pixels are inverted (i.e., this is a
/// negative image)
#[derive(Debug, Clone)]
pub struct Image {
pub data: Vec<(isize, isize)>,
pub bbox: (isize, isize, isize, isize),
pub neg: bool,
}
I use this to parse the input into an algorithm (vector of booleans) and an image:
/// parse content in algorithm (first line) and image (other non blank lines)
pub fn parse(content: &str) -> (Vec<bool>, Image) {
let mut lines = content.lines();
let algo = lines
.next()
.expect("No algo")
.trim()
.chars()
.map(|c| c == '#')
.collect::<Vec<_>>();
// if algo[0] is set, all pixels extending in any direction are lit after the first step
// to ensure a finite number of pixels is lit after an odd number of steps, the last element
// of the algorithm must not be set
assert!(
!algo[0] || !algo[algo.len() - 1],
"Algorithm will light infinitely many pixels forever"
);
let mut data = Vec::new();
let mut bbox = (isize::MAX, isize::MAX, isize::MIN, isize::MIN);
let neg = false;
for (y, line) in lines
.map(|line| line.trim())
.filter(|line| line.len() > 0)
.enumerate()
{
for (x, _) in line.chars().enumerate().filter(|(_, c)| c == &'#') {
data.push((x as isize, y as isize));
bbox.0 = cmp::min(bbox.0, x as isize);
bbox.1 = cmp::min(bbox.1, y as isize);
bbox.2 = cmp::max(bbox.2, x as isize + 1);
bbox.3 = cmp::max(bbox.3, y as isize + 1);
}
}
(algo, Image { data, bbox, neg })
}
The main part of the solution is a function that performs a single update step (see Today I learned for an explanation of why I create the intermediate grid
):
/// perform single update step
///
/// create a new image; if ``algo[0]`` is set, the ``neg`` flag of the image will flip
pub fn update_step(algo: &[bool], image: &Image) -> Image {
let mut data = Vec::with_capacity(image.data.len() * 2);
let mut bbox = (isize::MAX, isize::MAX, isize::MIN, isize::MIN);
let neg = algo[0] ^ image.neg;
// it is cheaper to put everything in a grid compared to looking up values repeatedly
// in a set or map
// make the grid big enough to not have to check out of bounds in loop later on
let (x_mn, y_mn, x_mx, y_mx) = image.bbox;
let x0 = x_mn - 2;
let y0 = y_mn - 2;
let w = (x_mx - x_mn) as usize + 4;
let h = (y_mx - y_mn) as usize + 4;
let mut grid = vec![false; w * h];
for (x, y) in &image.data {
grid[(x - x0) as usize + w * (y - y0) as usize] = true;
}
for x in x_mn - 1..x_mx + 1 {
for y in y_mn - 1..y_mx + 1 {
let k = (x - x0) as usize + w * (y - y0) as usize;
let mut idx = 0;
for j in [
k - w - 1,
k - w,
k - w + 1,
k - 1,
k,
k + 1,
k + w - 1,
k + w,
k + w + 1,
] {
idx = idx << 1 | (image.neg ^ grid[j]) as usize;
}
if neg ^ algo[idx] {
data.push((x, y));
bbox.0 = cmp::min(bbox.0, x);
bbox.1 = cmp::min(bbox.1, y);
bbox.2 = cmp::max(bbox.2, x + 1);
bbox.3 = cmp::max(bbox.3, y + 1);
}
}
}
Image { data, bbox, neg }
}
The solution is calculated as follows:
pub fn update_steps(algo: &[bool], image: &Image, steps: usize) -> Image {
assert!(steps >= 1, "Cannot perform less than one update step");
let mut image = update_step(&algo, image);
for _ in 1..steps {
image = update_step(algo, &image);
}
image
}
pub fn solution_1(algo: &[bool], image: &Image) -> usize {
update_steps(algo, image, 2).data.len()
}
pub fn solution_2(algo: &[bool], image: &Image) -> usize {
update_steps(algo, image, 50).data.len()
}
#[cfg(test)]
mod tests {
use super::*;
const CONTENT: &str =
"..#.#..#####.#.#.#.###.##.....###.##.#..###.####..#####..#....#..#..##..##\
#..######.###...####..#..#####..##..#.#####...##.#.#..#.##..#.#......#.###\
.######.###.####...#.##.##..#..#..#####.....#.#....###..#.##......#.....#.\
.#..#..##..#...##.######.####.####.#.#...#.......#..#.#.#...####.##.#.....\
.#..#...##.#.##..#...##.#.##..###.#......#.#.......#.#.#.####.###.##...#..\
...####.#..#..#.##.#....##..#.####....##...##..#...#......#.#.......#.....\
..##..####..#...#.#.#...##..#.#..###..#####........#..####......#..#\n\
\
#..#.\n\
#....\n\
##..#\n\
..#..\n\
..###";
#[test]
fn test_parse() {
let (algo, image) = parse(CONTENT);
assert_eq!(512, algo.len());
assert_eq!((0, 0, 5, 5), image.bbox);
assert_eq!(
Vec::from([
(0, 0),
(3, 0),
(0, 1),
(0, 2),
(1, 2),
(4, 2),
(2, 3),
(2, 4),
(3, 4),
(4, 4)
]),
image.data
);
assert!(!image.neg);
}
#[test]
fn test_update_steps() {
let (algo, image) = parse(CONTENT);
assert_eq!(35, update_steps(&algo, &image, 2).data.len());
assert_eq!(3_351, update_steps(&algo, &image, 50).data.len());
}
}
Today I learned a lot about performance.
In my first version, I stored the lit pixels of the image in a hash set with HashSet::contains
calls in the nested loop in my update_step
function. Building a grid
for direct lookup and replacing the hash set by a vector reduced the runtime for my solution (both parts) from ~500ms to ~25ms by a factor of 20! Pretty impressive.
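The difference can be sketched with the two lookup strategies side by side (toy count_set / count_grid helpers, not the solution code); the results are identical, only the constant factor changes:

```rust
use std::collections::HashSet;

// Sketch: the same membership queries answered by a HashSet versus a flat
// Vec<bool> grid. The grid trades memory for hash-free lookups.
fn count_set(pixels: &HashSet<(isize, isize)>, queries: &[(isize, isize)]) -> usize {
    queries.iter().filter(|p| pixels.contains(p)).count()
}

fn count_grid(pixels: &[(isize, isize)], w: usize, h: usize, queries: &[(isize, isize)]) -> usize {
    // build the grid once ...
    let mut grid = vec![false; w * h];
    for (x, y) in pixels {
        grid[*x as usize + w * *y as usize] = true;
    }
    // ... then every lookup is a single index operation
    queries
        .iter()
        .filter(|(x, y)| grid[*x as usize + w * *y as usize])
        .count()
}

fn main() {
    let pixels = vec![(0, 0), (3, 0), (0, 1), (2, 4)];
    let set: HashSet<_> = pixels.iter().copied().collect();
    let queries = vec![(0, 0), (1, 1), (2, 4), (4, 4)];
    // both strategies agree
    assert_eq!(count_set(&set, &queries), count_grid(&pixels, 5, 5, &queries));
}
```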
Rust solution to AoC|2021|21.
Part 1 is pretty straightforward:
/// play with a deterministic dice, winning threshold: 1000
///
/// return the loser's score times the number of times the dice was thrown
pub fn solution_1((mut p1, mut p2): (usize, usize)) -> u64 {
let mut s1 = 0;
let mut s2 = 0;
let mut dice = 0;
let mut t = true;
while s1 < 1000 && s2 < 1000 {
if t {
p1 = (p1 + 3 * dice + 6 - 1) % 10 + 1;
s1 += p1;
} else {
p2 = (p2 + 3 * dice + 6 - 1) % 10 + 1;
s2 += p2;
}
dice += 3;
t = !t;
}
(dice * cmp::min(s1, s2)) as u64
}
For part 2, my first attempt was a depth-first traversal through play states: I push play states with multiplicities on a stack.
Each state popped from the stack results in 7 new states for the seven different outcomes of throwing a 3-sided dice three times (1 x 3, 3 x 4, 6 x 5, 7 x 6, 6 x 7, 3 x 8, 1 x 9).
If a state is a win for either player, the win counter is increased. Otherwise, the new state with the new multiplicity is pushed to the stack.
I came up with the idea quite quickly. Still, it took some time to finish part 2 because of a stupid bug: I updated position and score for the 2nd, 3rd, … dice outcome not based on the current state but based on the state resulting from the 1st, 2nd, … outcome…
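The seven outcomes and their multiplicities (listed in DICE_MULTS below) can be derived by brute force over all 27 roll combinations; a small sketch:

```rust
// Sketch: derive the multiplicity of each three-roll sum of a 3-sided dice.
fn dice_mults() -> Vec<(usize, u64)> {
    let mut counts = [0u64; 10]; // sums range from 3 to 9
    for a in 1..=3 {
        for b in 1..=3 {
            for c in 1..=3 {
                counts[a + b + c] += 1;
            }
        }
    }
    (3..=9).map(|s| (s, counts[s])).collect()
}

fn main() {
    assert_eq!(
        dice_mults(),
        vec![(3, 1), (4, 3), (5, 6), (6, 7), (7, 6), (8, 3), (9, 1)]
    );
}
```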
/// 7 possible outcomes with three dice
///
/// * 3 - 1: (1 1 1)
/// * 4 - 3: (1 1 2) (1 2 1) (2 1 1)
/// * 5 - 6: (1 1 3) (1 2 2) (1 3 1) (2 1 2) (2 2 1) (3 1 1)
/// * 6 - 7: (1 2 3) (1 3 2) (2 1 3) (2 2 2) (2 3 1) (3 1 2) (3 2 1)
/// * 7 - 6: (1 3 3) (2 2 3) (2 3 2) (3 1 3) (3 2 2) (3 3 1)
/// * 8 - 3: (2 3 3) (3 2 3) (3 3 2)
/// * 9 - 1: (3 3 3)
pub const DICE_MULTS: [(usize, u64); 7] =
[(3, 1), (4, 3), (5, 6), (6, 7), (7, 6), (8, 3), (9, 1)];
#[cfg(feature = "stack")]
/// play with dirac quantum dice (stack based)
///
/// return the number of universes won by the player who wins more often
pub fn solution_2((p1, p2): (usize, usize)) -> u64 {
// push states with multiplicty on stack
// since I push to the end and pop from the end, this is depth first search
let mut stack = Vec::new();
stack.push((p1, p2, 0, 0, true, 1));
// win counts
let mut w1 = 0;
let mut w2 = 0;
// while there are unprocessed states
while let Some((p1, p2, s1, s2, t, mult)) = stack.pop() {
// loop over dice outcomes with multiplicity
for (v, n) in DICE_MULTS {
if t {
// player 1's turn
// update position and score
let p1_upd = (p1 + v - 1) % 10 + 1;
let s1_upd = s1 + p1_upd;
if s1_upd >= 21 {
// win
w1 += n * mult;
} else {
// continue to play later
stack.push((p1_upd, p2, s1_upd, s2, false, n * mult));
}
} else {
// player 2's turn
// update position and score
let p2_upd = (p2 + v - 1) % 10 + 1;
let s2_upd = s2 + p2_upd;
if s2_upd >= 21 {
// win
w2 += n * mult;
} else {
// continue to play later
stack.push((p1, p2_upd, s1, s2_upd, true, n * mult));
}
}
}
}
cmp::max(w1, w2)
}
Looking at the solution a bit closer, I figured out that there are a total of 88200 different play states possible (10 positions per player times 21 scores per player, times 2 because for any combination of positions and scores it might be player 1's or player 2's turn). Yet, my stack based solution processes ~17 million items from the stack, i.e., on average, each state is pushed to the stack more than 200 times. There seems to be potential for optimization.
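The state count is a quick piece of arithmetic worth writing down:

```rust
// Sketch: count the distinct play states of the Dirac dice game.
fn state_count() -> usize {
    let positions = 10 * 10; // pawn positions of player 1 and player 2
    let scores = 21 * 21;    // scores 0..=20 per player (>= 21 ends the game)
    let turns = 2;           // whose turn it is
    positions * scores * turns
}

fn main() {
    assert_eq!(state_count(), 88_200);
}
```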
I created a second solution based on a flat list of multiplicities for every possible state. I process the states sorted by the sum of both players' scores: because the total score increases in every round played, I am sure that a state processed once will never occur again. Here is my second solution:
#[cfg(not(feature = "stack"))]
/// play with dirac quantum dice (state list based)
///
/// return the number of universes won by the player who wins more often
pub fn solution_2((p1, p2): (usize, usize)) -> u64 {
// index to state multiplicity list
const F_IDX: fn(usize, usize, usize, usize, usize) -> usize =
|p1, p2, s1, s2, t| p1 + 10 * p2 + 100 * s1 + 2100 * s2 + 44100 * t;
// state multiplicity list
let mut mults = vec![0; 10 * 10 * 21 * 21 * 2];
// insert at pos_1 - 1 and pos_2 - 1 because position is zero based internally
mults[F_IDX(p1 - 1, p2 - 1, 0, 0, 0)] = 1;
// win counts
let mut w1 = 0;
let mut w2 = 0;
for sum in 0..=40 {
for s1 in if sum > 20 { sum - 20..=20 } else { 0..=sum } {
let s2 = sum - s1;
for p1 in 0..10 {
for p2 in 0..10 {
let mult_1 = mults[F_IDX(p1, p2, s1, s2, 0)];
let mult_2 = mults[F_IDX(p1, p2, s1, s2, 1)];
if mult_1 == 0 && mult_2 == 0 {
continue; // don't loop if no need
}
for (v, n) in DICE_MULTS {
let p1_upd = (p1 + v) % 10;
let s1_upd = s1 + p1_upd + 1;
if s1_upd >= 21 {
w1 += mult_1 * n;
} else {
mults[F_IDX(p1_upd, p2, s1_upd, s2, 1)] += mult_1 * n;
}
let p2_upd = (p2 + v) % 10;
let s2_upd = s2 + p2_upd + 1;
if s2_upd >= 21 {
w2 += mult_2 * n;
} else {
mults[F_IDX(p1, p2_upd, s1, s2_upd, 0)] += mult_2 * n;
}
}
}
}
}
}
cmp::max(w1, w2)
}
Quick runtime comparison:
Stack based solution: ~450ms
List based solution: ~3ms
List based is faster by a factor of 150!
Hence, list based is my default ;)
If you still want to run the stack based solution, you can do so with cargo run --release --features stack
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_play_deterministic() {
assert_eq!(739_785, solution_1((4, 8)))
}
#[test]
fn test_play_dirac() {
assert_eq!(444_356_092_776_315, solution_2((4, 8)))
}
}
Rust solution to AoC|2021|22.
Wow. This one took a while. I implemented part one based on a list. Obviously, this does not work for part 2. So a new idea was required.
I use a structure Cuboid
with one key function for the solution:
/// get count restricted to self recursively as follows
///
/// ``count(cuboids[k] in self) = count(cuboids[k] v self) - sum_i=1^k-1 count(cuboids[i] in (cuboids[k] v self))``
pub fn get_count_in(&self, cuboids: &[Self]) -> i64 {
if let Some(other) = cuboids.last() {
if let Some(i) = other.intersect(self) {
let mut count = if i.on { i.count() } else { 0 };
for k in 1..cuboids.len() {
count -= i.get_count_in(&cuboids[..k]);
}
return count;
}
}
0
}
This function recursively calculates the contribution of a previous reboot step restricted to the current reboot step. By subtracting all the contributions from previous steps, it essentially undoes the previous steps restricted to the area affected by the current step before applying the current step.
The recursion is necessary, because previous steps will in general overlap. So without the recursion, these overlapping areas would be undone several times.
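The recursion is easiest to see in one dimension. The following sketch uses a hypothetical Seg type as a 1D stand-in for Cuboid and checks the inclusion-exclusion count against a by-hand brute force:

```rust
// Sketch: 1D analogue of get_count_in.
#[derive(Clone, Copy)]
struct Seg {
    on: bool,
    mn: i64,
    mx: i64, // inclusive
}

impl Seg {
    fn intersect(&self, other: &Self) -> Option<Self> {
        let mn = self.mn.max(other.mn);
        let mx = self.mx.min(other.mx);
        (mx >= mn).then(|| Seg { on: self.on, mn, mx })
    }
    fn count(&self) -> i64 {
        self.mx - self.mn + 1
    }
    /// contribution of segs.last() restricted to self, with all earlier
    /// overlapping contributions subtracted recursively
    fn get_count_in(&self, segs: &[Self]) -> i64 {
        if let Some(other) = segs.last() {
            if let Some(i) = other.intersect(self) {
                let mut count = if i.on { i.count() } else { 0 };
                for k in 1..segs.len() {
                    count -= i.get_count_in(&segs[..k]);
                }
                return count;
            }
        }
        0
    }
}

fn main() {
    let universe = Seg { on: false, mn: i64::MIN / 2, mx: i64::MAX / 2 };
    let steps = [
        Seg { on: true, mn: 0, mx: 9 },   // turn on 0..=9  -> 10 cells lit
        Seg { on: false, mn: 5, mx: 12 }, // turn off 5..=12 -> 0..=4 lit, 5 cells
        Seg { on: true, mn: 8, mx: 8 },   // turn on 8      -> 6 cells lit
    ];
    let total: i64 = (0..steps.len())
        .map(|k| universe.get_count_in(&steps[..=k]))
        .sum();
    assert_eq!(total, 6);
}
```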
This is all it takes. With this the solution to both parts is calculated as
#[cfg(feature = "recursion")]
pub fn get_on_count(cuboids: &[Cuboid]) -> u64 {
(0..cuboids.len())
.map(|k| Cuboid::UNIVERSE.get_count_in(&cuboids[..=k]))
.sum::<i64>() as u64
}
For part 1, I restrict the cuboids to satisfy the given bounds:
pub fn solution_1(cuboids: &[Cuboid]) -> u64 {
get_on_count(
&cuboids
.iter()
.filter_map(|c| c.intersect(&Cuboid::SMALL_WORLD))
.collect::<Vec<_>>(),
)
}
And there is also a parse function, as usual:
pub fn parse(content: &str) -> Vec<Cuboid> {
let mut cuboids = Vec::new();
for line in content.lines().map(|l| l.trim()).filter(|l| l.len() > 0) {
let on = line.starts_with("on");
let mut ranges = line
.strip_prefix(if on { "on " } else { "off " })
.expect("Bad line prefix")
.split(',')
.zip(["x=", "y=", "z="].into_iter())
.map(|(p, pre)| p.strip_prefix(pre).expect("Bad part prefix").split(".."))
.map(|mut ranges| {
(
ranges.next().expect("No min").parse().unwrap(),
ranges.next().expect("No max").parse().unwrap(),
)
});
let (x_mn, x_mx) = ranges.next().expect("No x range");
let (y_mn, y_mx) = ranges.next().expect("No y range");
let (z_mn, z_mx) = ranges.next().expect("No z range");
cuboids.push(Cuboid {
on,
x_mn,
x_mx,
y_mn,
y_mx,
z_mn,
z_mx,
});
}
cuboids
}
Here is the complete implementation of the Cuboid
structure:
#[derive(PartialEq, Eq, Clone, Copy)]
pub struct Cuboid {
on: bool,
x_mn: i64,
x_mx: i64,
y_mn: i64,
y_mx: i64,
z_mn: i64,
z_mx: i64,
}
impl fmt::Debug for Cuboid {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"{} x={}..{},y={}..{},z={}..{}",
if self.on { "on" } else { "off" },
self.x_mn,
self.x_mx,
self.y_mn,
self.y_mx,
self.z_mn,
self.z_mx
)
}
}
impl Cuboid {
/// a ``Cuboid`` representing the small world of part 1
pub const SMALL_WORLD: Self = Self {
on: false,
x_mn: -50,
x_mx: 50,
y_mn: -50,
y_mx: 50,
z_mn: -50,
z_mx: 50,
};
/// a ``Cuboid`` spanning the complete _universe_
/// (very unlikely that the universe is larger than 64bit in any dimension)
pub const UNIVERSE: Self = Self {
on: false,
x_mn: i64::MIN,
x_mx: i64::MAX,
y_mn: i64::MIN,
y_mx: i64::MAX,
z_mn: i64::MIN,
z_mx: i64::MAX,
};
/// get intersection, on flag is copied from self
pub fn intersect(&self, other: &Self) -> Option<Self> {
let x_mn = cmp::max(self.x_mn, other.x_mn);
let x_mx = cmp::min(self.x_mx, other.x_mx);
let y_mn = cmp::max(self.y_mn, other.y_mn);
let y_mx = cmp::min(self.y_mx, other.y_mx);
let z_mn = cmp::max(self.z_mn, other.z_mn);
let z_mx = cmp::min(self.z_mx, other.z_mx);
if x_mx >= x_mn && y_mx >= y_mn && z_mx >= z_mn {
Some(Self {
on: self.on,
x_mn,
x_mx,
y_mn,
y_mx,
z_mn,
z_mx,
})
} else {
None
}
}
/// count the elements in this cuboid
pub fn count(&self) -> i64 {
(self.x_mx - self.x_mn + 1) * (self.y_mx - self.y_mn + 1) * (self.z_mx - self.z_mn + 1)
}
/// get count restricted to self recursively as follows
///
/// ``count(cuboids[k] in self) = count(cuboids[k] v self) - sum_i=1^k-1 count(cuboids[i] in (cuboids[k] v self))``
pub fn get_count_in(&self, cuboids: &[Self]) -> i64 {
if let Some(other) = cuboids.last() {
if let Some(i) = other.intersect(self) {
let mut count = if i.on { i.count() } else { 0 };
for k in 1..cuboids.len() {
count -= i.get_count_in(&cuboids[..k]);
}
return count;
}
}
0
}
}
Instead of a recursive function, it is also possible to use an explicit stack processed in a while loop. This solution requires less code and is slightly faster for me:
#[cfg(not(feature = "recursion"))]
pub fn get_on_count(cuboids: &[Cuboid]) -> u64 {
let mut count = 0;
let mut stack: Vec<_> = (0..cuboids.len()).map(|k| (cuboids[k], k, 1)).collect();
while let Some((c, k, sign)) = stack.pop() {
count += (c.on as i64) * sign * c.count();
stack.extend((0..k).filter_map(|k| cuboids[k].intersect(&c).map(|c| (c, k, -sign))));
}
count as u64
}
#[cfg(test)]
mod tests {
use super::*;
const CONTENT_PARSE: &str = "on x=10..12,y=10..12,z=10..12
on x=11..13,y=11..13,z=11..13
off x=9..11,y=9..11,z=9..11
on x=10..10,y=10..10,z=10..10";
const CUBOIDS_PARSE: &[Cuboid] = &[
Cuboid {
on: true,
x_mn: 10,
x_mx: 12,
y_mn: 10,
y_mx: 12,
z_mn: 10,
z_mx: 12,
},
Cuboid {
on: true,
x_mn: 11,
x_mx: 13,
y_mn: 11,
y_mx: 13,
z_mn: 11,
z_mx: 13,
},
Cuboid {
on: false,
x_mn: 9,
x_mx: 11,
y_mn: 9,
y_mx: 11,
z_mn: 9,
z_mx: 11,
},
Cuboid {
on: true,
x_mn: 10,
x_mx: 10,
y_mn: 10,
y_mx: 10,
z_mn: 10,
z_mx: 10,
},
];
const CONTENT_1: &str = "on x=-20..26,y=-36..17,z=-47..7
on x=-20..33,y=-21..23,z=-26..28
on x=-22..28,y=-29..23,z=-38..16
on x=-46..7,y=-6..46,z=-50..-1
on x=-49..1,y=-3..46,z=-24..28
on x=2..47,y=-22..22,z=-23..27
on x=-27..23,y=-28..26,z=-21..29
on x=-39..5,y=-6..47,z=-3..44
on x=-30..21,y=-8..43,z=-13..34
on x=-22..26,y=-27..20,z=-29..19
off x=-48..-32,y=26..41,z=-47..-37
on x=-12..35,y=6..50,z=-50..-2
off x=-48..-32,y=-32..-16,z=-15..-5
on x=-18..26,y=-33..15,z=-7..46
off x=-40..-22,y=-38..-28,z=23..41
on x=-16..35,y=-41..10,z=-47..6
off x=-32..-23,y=11..30,z=-14..3
on x=-49..-5,y=-3..45,z=-29..18
off x=18..30,y=-20..-8,z=-3..13
on x=-41..9,y=-7..43,z=-33..15
on x=-54112..-39298,y=-85059..-49293,z=-27449..7877
on x=967..23432,y=45373..81175,z=27513..53682";
const CONTENT_2: &str = "on x=-5..47,y=-31..22,z=-19..33
on x=-44..5,y=-27..21,z=-14..35
on x=-49..-1,y=-11..42,z=-10..38
on x=-20..34,y=-40..6,z=-44..1
off x=26..39,y=40..50,z=-2..11
on x=-41..5,y=-41..6,z=-36..8
off x=-43..-33,y=-45..-28,z=7..25
on x=-33..15,y=-32..19,z=-34..11
off x=35..47,y=-46..-34,z=-11..5
on x=-14..36,y=-6..44,z=-16..29
on x=-57795..-6158,y=29564..72030,z=20435..90618
on x=36731..105352,y=-21140..28532,z=16094..90401
on x=30999..107136,y=-53464..15513,z=8553..71215
on x=13528..83982,y=-99403..-27377,z=-24141..23996
on x=-72682..-12347,y=18159..111354,z=7391..80950
on x=-1060..80757,y=-65301..-20884,z=-103788..-16709
on x=-83015..-9461,y=-72160..-8347,z=-81239..-26856
on x=-52752..22273,y=-49450..9096,z=54442..119054
on x=-29982..40483,y=-108474..-28371,z=-24328..38471
on x=-4958..62750,y=40422..118853,z=-7672..65583
on x=55694..108686,y=-43367..46958,z=-26781..48729
on x=-98497..-18186,y=-63569..3412,z=1232..88485
on x=-726..56291,y=-62629..13224,z=18033..85226
on x=-110886..-34664,y=-81338..-8658,z=8914..63723
on x=-55829..24974,y=-16897..54165,z=-121762..-28058
on x=-65152..-11147,y=22489..91432,z=-58782..1780
on x=-120100..-32970,y=-46592..27473,z=-11695..61039
on x=-18631..37533,y=-124565..-50804,z=-35667..28308
on x=-57817..18248,y=49321..117703,z=5745..55881
on x=14781..98692,y=-1341..70827,z=15753..70151
on x=-34419..55919,y=-19626..40991,z=39015..114138
on x=-60785..11593,y=-56135..2999,z=-95368..-26915
on x=-32178..58085,y=17647..101866,z=-91405..-8878
on x=-53655..12091,y=50097..105568,z=-75335..-4862
on x=-111166..-40997,y=-71714..2688,z=5609..50954
on x=-16602..70118,y=-98693..-44401,z=5197..76897
on x=16383..101554,y=4615..83635,z=-44907..18747
off x=-95822..-15171,y=-19987..48940,z=10804..104439
on x=-89813..-14614,y=16069..88491,z=-3297..45228
on x=41075..99376,y=-20427..49978,z=-52012..13762
on x=-21330..50085,y=-17944..62733,z=-112280..-30197
on x=-16478..35915,y=36008..118594,z=-7885..47086
off x=-98156..-27851,y=-49952..43171,z=-99005..-8456
off x=2032..69770,y=-71013..4824,z=7471..94418
on x=43670..120875,y=-42068..12382,z=-24787..38892
off x=37514..111226,y=-45862..25743,z=-16714..54663
off x=25699..97951,y=-30668..59918,z=-15349..69697
off x=-44271..17935,y=-9516..60759,z=49131..112598
on x=-61695..-5813,y=40978..94975,z=8655..80240
off x=-101086..-9439,y=-7088..67543,z=33935..83858
off x=18020..114017,y=-48931..32606,z=21474..89843
off x=-77139..10506,y=-89994..-18797,z=-80..59318
off x=8476..79288,y=-75520..11602,z=-96624..-24783
on x=-47488..-1262,y=24338..100707,z=16292..72967
off x=-84341..13987,y=2429..92914,z=-90671..-1318
off x=-37810..49457,y=-71013..-7894,z=-105357..-13188
off x=-27365..46395,y=31009..98017,z=15428..76570
off x=-70369..-16548,y=22648..78696,z=-1892..86821
on x=-53470..21291,y=-120233..-33476,z=-44150..38147
off x=-93533..-4276,y=-16170..68771,z=-104985..-24507";
#[test]
fn test_parse() {
assert_eq!(CUBOIDS_PARSE, parse(CONTENT_PARSE));
}
#[test]
fn test_solution_1_simple() {
assert_eq!(27, solution_1(&CUBOIDS_PARSE[..1]));
assert_eq!(27 + 19, solution_1(&CUBOIDS_PARSE[..2]));
assert_eq!(27 + 19 - 8, solution_1(&CUBOIDS_PARSE[..3]));
assert_eq!(27 + 19 - 8 + 1, solution_1(&CUBOIDS_PARSE[..4]));
}
#[test]
fn test_solution_1() {
assert_eq!(590_784, solution_1(&parse(CONTENT_1)));
}
#[test]
fn test_solution_2_simple() {
assert_eq!(27, solution_2(&CUBOIDS_PARSE[..1]));
assert_eq!(27 + 19, solution_2(&CUBOIDS_PARSE[..2]));
assert_eq!(27 + 19 - 8, solution_2(&CUBOIDS_PARSE[..3]));
assert_eq!(27 + 19 - 8 + 1, solution_2(&CUBOIDS_PARSE[..4]));
}
#[test]
fn test_solution_1_2_compare() {
let cuboids = parse(CONTENT_1)
.iter()
.filter_map(|c| c.intersect(&Cuboid::SMALL_WORLD))
.collect::<Vec<_>>();
for k in 1..cuboids.len() {
println!("\n===\n{:3}\n===", k);
assert_eq!(
solution_1(&cuboids[0..k]),
solution_2(&cuboids[0..k]),
"Failed at step {}",
k
);
}
}
#[test]
fn test_solution_2() {
assert_eq!(2_758_514_936_282_235, solution_2(&parse(CONTENT_2)));
}
}
Rust solution to AoC|2021|23.
My initial idea was to use Dijkstra. The first challenge was to come up with a way to model the data. After some tries, I decided to model the burrow as a list of Option<u8>s. The first 11 entries model the hallway; the remaining elements model the rooms one after the other. A None value represents open space, a Some(p) value represents a space occupied by an amphipod, A for p = 0 up to D for p = 3.
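To make the encoding concrete, here is a small standalone sketch (the helper name room_index is mine; the actual solution uses trait constants for the same arithmetic):

```rust
// Sketch of the flat burrow encoding described above; helper names are my own,
// the actual solution uses trait constants instead.
const HALLWAY_LEN: usize = 11;
const ROOM_LEN: usize = 2; // part 1: two spaces per room

/// Flat index of a room space: rooms follow the hallway one after the other.
fn room_index(room: usize, depth: usize) -> usize {
    HALLWAY_LEN + room * ROOM_LEN + depth
}

fn main() {
    // Target state for part 1: hallway empty, each room filled with its own type.
    let mut burrow: Vec<Option<u8>> = vec![None; HALLWAY_LEN + 4 * ROOM_LEN];
    for room in 0..4 {
        for depth in 0..ROOM_LEN {
            burrow[room_index(room, depth)] = Some(room as u8); // 0 = A, ..., 3 = D
        }
    }
    assert_eq!(Some(0), burrow[room_index(0, 0)]); // top of room A
    assert_eq!(Some(3), burrow[room_index(3, 1)]); // bottom of room D
    assert!(burrow[..HALLWAY_LEN].iter().all(Option::is_none));
    println!("{} cells", burrow.len());
}
```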
The next challenge is to find all adjacent states from a given state. To solve this, I use a constant MAP listing all adjacent positions for every position. To find the adjacents, I do a depth first search using this map, starting from every position that contains an amphipod. There is a lot of potential for bugs in implementing the rules from the puzzle description that determine whether a position is valid for a given amphipod.
After all bugs were fixed, I had a solution which solved part 1 in ~15 seconds and never finished for part 2.
I thought that the key to also solving part 2 in reasonable time and with reasonable memory requirements was to switch from Dijkstra to A*. As a heuristic for the remaining cost, I add up the energies required to move every amphipod to its target position, ignoring all amphipods in the way. For amphipods that already sit in their target room but have amphipods of other types below them, I add the cost for moving out of the room and in again. I realized that A* might help a lot when I solved part 1 manually and noticed that the actual cost was only very little above this lower bound.
Adding the A* heuristic reduced the solution time to <1s for both parts together.
After my A* solution worked, I switched it off again and it turned out that the solution still runs in slightly above 1s for part 2. I cannot figure out what my original mistake really was. Implementing A* obviously helped to fix it…
As an additional optimization, I perform some simple deadlock checks (amphipods that cannot exchange position in the hallway and amphipods that cannot reach their room from the left-most or right-most rooms) prior to adding adjacents to the queue. This saves ~40% runtime, mainly for part 2.
I made the use of the A* heuristic and the deadlock check configurable via the features a-star and deadlock-check, both active by default. It turns out that A* does not reduce the runtime a lot for part 2 but helps a lot for part 1, while it is the other way around for the deadlock checks.
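The corresponding feature declarations in Cargo.toml might look like this (a sketch; the actual manifest is not shown here):

```toml
[features]
default = ["a-star", "deadlock-check"]
a-star = []
deadlock-check = []
```

With this, cargo run --release --no-default-features runs the plain Dijkstra without deadlock checks.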
The actual search is performed in a solve method, which mostly implements the logic to find the adjacents:
pub fn solve<B>(start: B, trace_path: bool) -> usize
where
B: Burrow,
{
let mut search = Search::init(start, B::get_min_cost);
while let Some((idx, burrow)) = search.pop() {
if B::TARGET.eq(burrow.as_ref()) {
// target reached
if trace_path {
search.print_path_to(idx);
}
return search.cost(idx);
}
for k in 0..B::LEN {
if let Some(p) = burrow[k] {
let cur_room = B::get_room(k);
if cur_room == Some(p)
&& (k + 1..B::get_room_mx(p)).all(|k_r| burrow[k_r] == Some(p))
{
// position is settled in room (below are only the pods of the same type)
continue;
}
let mut done = vec![false; B::LEN];
let mut stack = Vec::new();
done[k] = true;
for k_adj in B::MAP[k] {
done[*k_adj] = true;
stack.push((1, *k_adj));
}
let mut push_later = Vec::new();
let mut do_push = true;
while let Some((steps, k_adj)) = stack.pop() {
if burrow[k_adj].is_some() {
// cannot move on top of other
continue;
}
let adj_room = B::get_room(k_adj);
if adj_room == Some(p)
&& (k_adj + 1..B::get_room_mx(p)).all(|k_r| burrow[k_r] == Some(p))
{
// move in own room
let weight = steps * B::ENERGIES[p as usize];
let mut adjacent = burrow.as_ref().clone();
adjacent.move_pod(k, k_adj);
search.push(idx, adjacent, weight);
// no need to continue to look for other adjacents
do_push = false;
break;
}
if k_adj < B::HALLWAY_LEN && !B::is_door(k_adj) && cur_room.is_some() {
// move from room to hallway
let weight = steps * B::ENERGIES[p as usize];
let mut adjacent = burrow.as_ref().clone();
adjacent.move_pod(k, k_adj);
if !adjacent.is_deadlock() {
push_later.push((idx, adjacent, weight));
}
// continue search, other positions in the hallway may be valid adjacents
}
for k_adj_2 in B::MAP[k_adj] {
if !done[*k_adj_2] {
done[*k_adj_2] = true;
stack.push((steps + 1, *k_adj_2));
}
}
}
if do_push {
for (parent_idx, adjacent, weight) in push_later {
search.push(parent_idx, adjacent, weight);
}
}
}
}
}
panic!("No path found");
}
I wanted to have a solution that works for both part 1 and part 2. Therefore, I implemented a trait Burrow and a trait BurrowCommon to be able to create blanket implementations for common behavior and properties. The two structs implementing the trait Burrow are BurrowSmall for part 1 and BurrowLarge for part 2.
#[derive(PartialEq, Eq, Clone, PartialOrd, Ord, Hash)]
pub struct BurrowLarge {
data: [Option<u8>; 11 + 4 * 4],
}
#[derive(PartialEq, Eq, Clone, PartialOrd, Ord, Hash)]
pub struct BurrowSmall {
data: [Option<u8>; 11 + 4 * 2],
}
pub trait Burrow:
std::ops::Index<usize, Output = Option<u8>>
+ std::ops::IndexMut<usize>
+ Eq
+ Ord
+ std::hash::Hash
+ Clone
+ std::fmt::Debug
{
const ROOM_LEN: usize;
const LEN: usize;
const MAP: &'static [&'static [usize]];
const MIN_COST: &'static [[usize; 4]];
const TARGET: Self;
}
pub trait BurrowCommon {
const HALLWAY_LEN: usize;
const ENERGIES: [usize; 4];
fn format(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result;
fn is_door(idx: usize) -> bool;
fn get_room(idx: usize) -> Option<u8>;
fn get_room_mn(room: u8) -> usize;
fn get_room_mx(room: u8) -> usize;
fn move_pod(&mut self, idx_from: usize, idx_to: usize);
fn get_min_cost(&self) -> usize;
fn is_deadlock(&self) -> bool;
}
impl<B> BurrowCommon for B
where
B: Burrow,
{
const HALLWAY_LEN: usize = 11;
const ENERGIES: [usize; 4] = [1, 10, 100, 1000];
fn format(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "#############\n")?;
write!(f, "#")?;
for k in 0..Self::HALLWAY_LEN {
match self[k] {
Some(0) => write!(f, "A")?,
Some(1) => write!(f, "B")?,
Some(2) => write!(f, "C")?,
Some(3) => write!(f, "D")?,
_ => write!(f, ".")?,
};
}
write!(f, "#\n")?;
for row in 0..Self::ROOM_LEN {
if row == 0 {
write!(f, "###")?;
} else {
write!(f, " #")?;
}
for room in 0..4 {
match self[Self::HALLWAY_LEN + Self::ROOM_LEN * room + row] {
Some(0) => write!(f, "A#")?,
Some(1) => write!(f, "B#")?,
Some(2) => write!(f, "C#")?,
Some(3) => write!(f, "D#")?,
_ => write!(f, ".#")?,
};
}
if row == 0 {
write!(f, "##\n")?;
} else {
write!(f, "\n")?;
}
}
write!(f, " #########")
}
fn is_door(idx: usize) -> bool {
idx == 2 || idx == 4 || idx == 6 || idx == 8
}
fn get_room(idx: usize) -> Option<u8> {
if idx >= Self::HALLWAY_LEN {
Some(((idx - Self::HALLWAY_LEN) / Self::ROOM_LEN) as u8)
} else {
None
}
}
fn get_room_mn(room: u8) -> usize {
Self::HALLWAY_LEN + room as usize * Self::ROOM_LEN
}
fn get_room_mx(room: u8) -> usize {
Self::HALLWAY_LEN + (room as usize + 1) * Self::ROOM_LEN
}
fn move_pod(&mut self, idx_from: usize, idx_to: usize) {
assert_eq!(None, self[idx_to]);
self[idx_to] = self[idx_from];
self[idx_from] = None;
}
#[cfg(feature = "a-star")]
fn get_min_cost(&self) -> usize {
let mut cost = 0;
for k in 0..Self::LEN {
if let Some(p) = self[k] {
// lookup minimum cost from table
cost += Self::MIN_COST[k][p as usize];
if let Some(r) = Self::get_room(k) {
if p == r
&& (k + 1..Self::get_room_mx(r))
.any(|k| self[k].map(|p2| p2 != p).unwrap_or(false))
{
// foreigners below, need to
// - move out (+1) and
// - move one to the side (+1)
// - and back again (*2)
let adder = 2
* (k - Self::get_room_mn(r) as usize + 2)
* Self::ENERGIES[p as usize];
cost += adder;
}
}
}
}
for e in Self::ENERGIES {
cost -= Self::ROOM_LEN * (Self::ROOM_LEN - 1) / 2 * e;
}
cost
}
#[cfg(not(feature = "a-star"))]
fn get_min_cost(&self) -> usize {
0
}
#[cfg(feature = "deadlock-check")]
fn is_deadlock(&self) -> bool {
// pods in hallway which cannot exchange positions
if self[3] == Some(3) && self[5] == Some(0)
|| self[3] == Some(3) && self[7] == Some(0)
|| self[5] == Some(3) && self[7] == Some(0)
|| self[5] == Some(3) && self[7] == Some(1)
|| self[3] == Some(2) && self[5] == Some(0)
{
return true;
}
// pods in the leftmost room which cannot reach any other room
if self[1].is_some()
&& self[3] == Some(0)
&& (Self::get_room_mn(0)..Self::get_room_mx(0))
.filter_map(|k| self[k])
.max()
.map_or(false, |mx| mx > 0)
{
return true;
}
// pods in the rightmost room which cannot reach any other room
if self[9].is_some()
&& self[7] == Some(3)
&& (Self::get_room_mn(3)..Self::get_room_mx(3))
.filter_map(|k| self[k])
.min()
.map_or(false, |mn| mn < 3)
{
return true;
}
false
}
#[cfg(not(feature = "deadlock-check"))]
fn is_deadlock(&self) -> bool {
false
}
}
impl Burrow for BurrowLarge {
const TARGET: Self = Self {
data: [
None,
None,
None,
None,
None,
None,
None,
None,
None,
None,
None,
Some(0),
Some(0),
Some(0),
Some(0),
Some(1),
Some(1),
Some(1),
Some(1),
Some(2),
Some(2),
Some(2),
Some(2),
Some(3),
Some(3),
Some(3),
Some(3),
],
};
const ROOM_LEN: usize = 4;
const LEN: usize = 11 + 4 * 4;
const MAP: &'static [&'static [usize]] = &[
&[1],
&[0, 2],
&[1, 3, 11],
&[2, 4],
&[3, 5, 15],
&[4, 6],
&[5, 7, 19],
&[6, 8],
&[7, 9, 23],
&[8, 10],
&[9],
// room 1
&[2, 12],
&[11, 13],
&[12, 14],
&[13],
// room 2
&[4, 16],
&[15, 17],
&[16, 18],
&[17],
// room 3
&[6, 20],
&[19, 21],
&[20, 22],
&[21],
// room 4
&[8, 24],
&[23, 25],
&[24, 26],
&[25],
];
const MIN_COST: &'static [[usize; 4]] = &[
// hallway
[6, 80, 1000, 12000],
[5, 70, 900, 11000],
[4, 60, 800, 10000],
[5, 50, 700, 9000],
[6, 40, 600, 8000],
[7, 50, 500, 7000],
[8, 60, 400, 6000],
[9, 70, 500, 5000],
[10, 80, 600, 4000],
[11, 90, 700, 5000],
[12, 100, 800, 6000],
// room 1
[3, 70, 900, 11000],
[2, 80, 1000, 12000],
[1, 90, 1100, 13000],
[0, 100, 1200, 14000],
// room 2
[7, 30, 700, 9000],
[8, 20, 800, 10000],
[9, 10, 900, 11000],
[10, 0, 1000, 12000],
// room 3
[9, 70, 300, 7000],
[10, 80, 200, 8000],
[11, 90, 100, 9000],
[12, 100, 0, 10000],
// room 4
[11, 90, 700, 3000],
[12, 100, 800, 2000],
[13, 110, 900, 1000],
[14, 120, 1000, 0],
];
}
impl Burrow for BurrowSmall {
const TARGET: Self = Self {
data: [
None,
None,
None,
None,
None,
None,
None,
None,
None,
None,
None,
Some(0),
Some(0),
Some(1),
Some(1),
Some(2),
Some(2),
Some(3),
Some(3),
],
};
const ROOM_LEN: usize = 2;
const LEN: usize = 11 + 4 * 2;
const MAP: &'static [&'static [usize]] = &[
&[1],
&[0, 2],
&[1, 3, 11],
&[2, 4],
&[3, 5, 13],
&[4, 6],
&[5, 7, 15],
&[6, 8],
&[7, 9, 17],
&[8, 10],
&[9],
// room 1
&[2, 12],
&[11],
// room 2
&[4, 14],
&[13],
// room 3
&[6, 16],
&[15],
// room 4
&[8, 18],
&[17],
];
const MIN_COST: &'static [[usize; 4]] = &[
// hallway
[4, 60, 800, 10000],
[3, 50, 700, 9000],
[2, 40, 600, 8000],
[3, 30, 500, 7000],
[4, 20, 400, 6000],
[5, 30, 300, 5000],
[6, 40, 200, 4000],
[7, 50, 300, 3000],
[8, 60, 400, 2000],
[9, 70, 500, 3000],
[10, 80, 600, 4000],
// room 1
[1, 50, 700, 9000],
[0, 60, 800, 10000],
// room 2
[5, 10, 500, 7000],
[6, 0, 600, 8000],
// room 3
[7, 50, 100, 5000],
[8, 60, 0, 6000],
// room 4
[9, 70, 500, 1000],
[10, 80, 600, 0],
];
}
impl std::fmt::Debug for BurrowSmall {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
self.format(f)
}
}
impl std::fmt::Display for BurrowSmall {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
self.format(f)
}
}
impl std::ops::Index<usize> for BurrowSmall {
type Output = Option<u8>;
fn index(&self, index: usize) -> &Self::Output {
&self.data[index]
}
}
impl std::ops::IndexMut<usize> for BurrowSmall {
fn index_mut(&mut self, index: usize) -> &mut Self::Output {
&mut self.data[index]
}
}
impl std::fmt::Debug for BurrowLarge {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
self.format(f)
}
}
impl std::fmt::Display for BurrowLarge {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
self.format(f)
}
}
impl std::ops::Index<usize> for BurrowLarge {
type Output = Option<u8>;
fn index(&self, index: usize) -> &Self::Output {
&self.data[index]
}
}
impl std::ops::IndexMut<usize> for BurrowLarge {
fn index_mut(&mut self, index: usize) -> &mut Self::Output {
&mut self.data[index]
}
}
impl BurrowSmall {
pub fn parse(content: &str) -> Self {
let mut lines = content.lines().skip(1);
let mut data = [None; Self::LEN];
for (k, c) in lines
.next()
.expect("No hallway")
.chars()
.skip(1)
.enumerate()
{
data[k] = match c {
'A' => Some(0),
'B' => Some(1),
'C' => Some(2),
'D' => Some(3),
'.' => None,
'#' => break,
_ => panic!("Unexpected char: {}", c),
}
}
for row in 0..Self::ROOM_LEN {
let mut i = 0;
for (_, c) in lines
.next()
.expect("No hallway")
.chars()
.skip(1)
.enumerate()
.filter(|(k, _)| Self::is_door(*k))
{
let pod = match c {
'A' => Some(0),
'B' => Some(1),
'C' => Some(2),
'D' => Some(3),
'.' => None,
_ => panic!("Unexpected character: {}", c),
};
data[Self::HALLWAY_LEN + i * Self::ROOM_LEN + row] = pod;
i += 1;
}
}
Self { data }
}
}
impl BurrowLarge {
pub fn from(burrow: &BurrowSmall) -> Self {
let mut data = [None; Self::LEN];
for k in 0..Self::HALLWAY_LEN {
data[k] = burrow.data[k];
}
for k in 0..4 {
data[Self::HALLWAY_LEN + k * Self::ROOM_LEN + 0] =
burrow.data[BurrowSmall::HALLWAY_LEN + k * BurrowSmall::ROOM_LEN + 0];
data[Self::HALLWAY_LEN + k * Self::ROOM_LEN + 3] =
burrow.data[BurrowSmall::HALLWAY_LEN + k * BurrowSmall::ROOM_LEN + 1];
}
data[Self::HALLWAY_LEN + 0 * Self::ROOM_LEN + 1] = Some(3);
data[Self::HALLWAY_LEN + 0 * Self::ROOM_LEN + 2] = Some(3);
data[Self::HALLWAY_LEN + 1 * Self::ROOM_LEN + 1] = Some(2);
data[Self::HALLWAY_LEN + 1 * Self::ROOM_LEN + 2] = Some(1);
data[Self::HALLWAY_LEN + 2 * Self::ROOM_LEN + 1] = Some(1);
data[Self::HALLWAY_LEN + 2 * Self::ROOM_LEN + 2] = Some(0);
data[Self::HALLWAY_LEN + 3 * Self::ROOM_LEN + 1] = Some(0);
data[Self::HALLWAY_LEN + 3 * Self::ROOM_LEN + 2] = Some(2);
Self { data }
}
}
The search state is modeled as a separate struct. It contains a list of all nodes as tuples ((cost bound, cost), parent node index wrapped in an Option, settled flag, burrow). To find the node for a given burrow, a map with burrows as keys and indices into the list as values is maintained. A binary heap is used for the graph traversal. Since BinaryHeap is a max heap, the costs are inverted (because I use unsigned types, this can be done with a bitwise not). I do not use decrease-key when pushing to the heap; instead, I discard items popped from the heap if the corresponding node is already settled.
pub struct Search<T> {
/// ((bound, cost), parent index, settled, item)
nodes: Vec<((usize, usize), Option<usize>, bool, Rc<T>)>,
heap: BinaryHeap<((usize, usize), usize)>,
map: HashMap<Rc<T>, usize>,
heuristic: fn(&T) -> usize,
}
impl<T> Search<T>
where
T: Eq + std::hash::Hash + std::fmt::Debug,
{
pub fn init(start: T, heuristic: fn(&T) -> usize) -> Self {
let mut search = Self {
nodes: Vec::new(),
heap: BinaryHeap::new(),
map: HashMap::new(),
heuristic,
};
let start = Rc::new(start);
search
.nodes
.push(((!0, !0), None, false, Rc::clone(&start)));
search.heap.push(((!0, !0), 0));
search.map.insert(start, 0);
search
}
pub fn cost(&self, idx: usize) -> usize {
!self.nodes[idx].0 .1
}
pub fn pop(&mut self) -> Option<(usize, Rc<T>)> {
while let Some((_, idx)) = self.heap.pop() {
if !self.nodes[idx].2 {
self.nodes[idx].2 = true;
return Some((idx, Rc::clone(&self.nodes[idx].3)));
}
}
None
}
pub fn push(&mut self, parent_idx: usize, adjacent: T, weight: usize) -> bool {
let adjacent = Rc::new(adjacent);
let idx = *self.map.entry(Rc::clone(&adjacent)).or_insert_with(|| {
self.nodes.push(((0, 0), None, false, adjacent));
self.nodes.len() - 1
});
if self.nodes[idx].2 {
return false;
}
let cost = self.cost(parent_idx) + weight;
let comp_cost = (!(cost + (self.heuristic)(&self.nodes[idx].3)), !cost);
if comp_cost > self.nodes[idx].0 {
self.nodes[idx].0 = comp_cost;
self.nodes[idx].1 = Some(parent_idx);
self.heap.push((comp_cost, idx));
return true;
}
false
}
pub fn get_path_to(&self, target: usize) -> VecDeque<(usize, usize, Rc<T>)> {
let mut path = VecDeque::new();
let mut current = target;
loop {
let ((bound, cost), parent, _, node) = &self.nodes[current];
path.push_front((!bound, !cost, Rc::clone(node)));
if let Some(parent) = parent {
current = *parent;
} else {
break;
}
}
path
}
pub fn print_path_to(&self, target: usize) {
let mut steps = 0;
for (bound, cost, item) in self.get_path_to(target) {
if steps == 0 {
println!("\nInitial\n{:?}", item.as_ref());
} else {
println!("\n{}) {} (bound: {}) ==>\n{:?}", steps, cost, bound, item.as_ref());
}
steps += 1;
}
}
}
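The cost inversion works because bitwise not reverses the order of unsigned integers: !x equals usize::MAX - x, so the smallest cost has the largest inverted value and is popped first from the max-heap. A quick standalone sanity check (my own example, not part of the solution):

```rust
fn main() {
    // For unsigned integers, !x = MAX - x, so the ordering is exactly reversed:
    // the smaller the cost, the larger its inverted value.
    let costs: [usize; 3] = [5, 20, 7];
    let mut heap = std::collections::BinaryHeap::new();
    for c in costs {
        heap.push(!c); // push the inverted cost into the max-heap
    }
    // Popping from the max-heap and inverting back yields costs in ascending order.
    let popped: Vec<usize> = std::iter::from_fn(|| heap.pop()).map(|c| !c).collect();
    assert_eq!(popped, vec![5, 7, 20]);
    println!("{:?}", popped);
}
```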
#[cfg(test)]
mod tests {
use super::*;
const CONTENT_1: &str = "#############
#...........#
###B#C#B#D###
  #A#D#C#A#
  #########";
const BURROW_1: BurrowSmall = BurrowSmall {
data: [
None,
None,
None,
None,
None,
None,
None,
None,
None,
None,
None,
Some(1),
Some(0),
Some(2),
Some(3),
Some(1),
Some(2),
Some(3),
Some(0),
],
};
const BURROW_2: BurrowLarge = BurrowLarge {
data: [
None,
None,
None,
None,
None,
None,
None,
None,
None,
None,
None,
Some(1),
Some(3),
Some(3),
Some(0),
Some(2),
Some(2),
Some(1),
Some(3),
Some(1),
Some(1),
Some(0),
Some(2),
Some(3),
Some(0),
Some(2),
Some(0),
],
};
#[test]
fn test_parse() {
let burrow = parse(CONTENT_1);
assert_eq!(BURROW_1, burrow);
println!("{:?}", burrow);
}
const CONTENT_2: &str = "#############
#...B.......#
###B#C#.#D###
  #A#D#C#A#
  #########";
const CONTENT_3: &str = "#############
#...B.......#
###B#.#C#D###
  #A#D#C#A#
  #########";
const CONTENT_4: &str = "#############
#.....D.....#
###B#.#C#D###
  #A#B#C#A#
  #########";
const CONTENT_5: &str = "#############
#.....D.....#
###.#B#C#D###
  #A#B#C#A#
  #########";
const CONTENT_6: &str = "#############
#.....D.D.A.#
###.#B#C#.###
  #A#B#C#.#
  #########";
const CONTENT_7: &str = "#############
#.........A.#
###.#B#C#D###
  #A#B#C#D#
  #########";
const CONTENT_8: &str = "#############
#...........#
###A#B#C#D###
  #A#B#C#D#
  #########";
#[test]
fn test_solution_1() {
let exp_costs = [
40 + 400 + 3030 + 40 + 2003 + 7000 + 8,
400 + 3030 + 40 + 2003 + 7000 + 8,
3030 + 40 + 2003 + 7000 + 8,
40 + 2003 + 7000 + 8,
2003 + 7000 + 8,
7000 + 8,
8,
0,
];
let burrows = [
BURROW_1,
parse(CONTENT_2),
parse(CONTENT_3),
parse(CONTENT_4),
parse(CONTENT_5),
parse(CONTENT_6),
parse(CONTENT_7),
parse(CONTENT_8),
];
for k in (0..burrows.len()).rev() {
let act_cost = solution_1(&burrows[k]);
assert_eq!(exp_costs[k], act_cost);
}
}
#[test]
fn test_solution_2() {
assert_eq!(44169, solution_2(&BURROW_1));
}
#[test]
fn test_burrow_large_from() {
assert_eq!(BURROW_2, BurrowLarge::from(&BURROW_1));
}
#[test]
fn test_get_min_cost() {
let burrow = BurrowLarge::from(&parse(
"#############
#...........#
###A#D#C#A###
  #C#D#B#B#
  #########",
));
println!("{:?}", burrow);
assert_eq!(43365, burrow.get_min_cost());
assert_eq!(0, BurrowLarge::TARGET.get_min_cost());
}
}
A* can be very helpful compared to plain Dijkstra! But it is even more helpful to implement the basic Dijkstra correctly.
I think this was the first time I created what Rust calls "blanket implementations".
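The pattern boils down to implementing a trait once for every type satisfying some bound. A minimal toy example (my own, unrelated to the burrow types):

```rust
trait Named {
    fn name(&self) -> &'static str;
}

trait Greet {
    fn greet(&self) -> String;
}

// Blanket implementation: every type implementing Named gets Greet for free.
impl<T: Named> Greet for T {
    fn greet(&self) -> String {
        format!("Hello, {}!", self.name())
    }
}

struct Small;
impl Named for Small {
    fn name(&self) -> &'static str {
        "small"
    }
}

fn main() {
    // Small never implements Greet directly, the blanket impl provides it.
    assert_eq!("Hello, small!", Small.greet());
    println!("{}", Small.greet());
}
```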
Rust solution to AoC|2021|24.
I don’t get the idea of today’s puzzle. I essentially solved it manually. From my point of view, a well-posed problem allows implementing a solution based on the description alone, without considering the specific puzzle input, and then running that solution on the specific input. For today’s problem, I don’t think there is a way to find a solution without analyzing the puzzle input.
I reverse engineered the MONAD (puzzle input) to figure out what actually happens. The content of the variable z can essentially be interpreted as a stack of input digits with offsets. To push a stack element, the current content is multiplied by 26 (in the case of my puzzle input) and then the next element is added. This constrains an element to be in the range 0 to 25 (inclusive). Since input digits are in the range 1 to 9 (inclusive), offsets must be in the range 0 to 16 (inclusive).
The MONAD consists of 14 similar blocks of instructions (one for every input digit), which evaluate a condition and based on the outcome do or do not push a new element on the stack (the z variable).
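The stack interpretation can be sketched in a few lines (a standalone illustration, not taken from the solution; the constant 26 is the module of my puzzle input):

```rust
const MODULE: i32 = 26; // base used by my puzzle input

/// Push a digit-plus-offset element onto the "stack" stored in z.
fn push(z: i32, digit: i32, offset: i32) -> i32 {
    z * MODULE + digit + offset
}

/// Pop the top element off the stack; returns (remaining z, top element).
fn pop(z: i32) -> (i32, i32) {
    (z / MODULE, z % MODULE)
}

fn main() {
    // Pushing two elements and popping them back recovers digit + offset.
    let z = push(push(0, 9, 4), 3, 7); // stack: [13, 10] from bottom to top
    let (z, top) = pop(z);
    assert_eq!(top, 3 + 7);
    let (z, top) = pop(z);
    assert_eq!(top, 9 + 4);
    assert_eq!(z, 0); // empty stack means z == 0, i.e. a valid serial number
    println!("stack round-trip ok");
}
```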
In seven of the blocks, the condition evaluates to true independently of the input digit. In the remaining seven blocks, a new element is pushed to the stack if dX != dY + o. For the stack to be empty at the end, and thus the variable z to equal zero, no elements must be pushed in those blocks, i.e., dX = dY + o, where dX and dY are input digits (digits of the serial number to be checked) and o >= 0 is an offset.
For the equalities to be satisfiable, dY >= 1 and dY <= 9 - o need to hold. The first constraint yields the smallest valid serial number (dY = 1 with corresponding dX = 1 + o), the second constraint yields the largest valid serial number (dY = 9 - o with corresponding dX = 9).
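Deriving the extremal serial numbers from the constraints is then mechanical, as in this sketch (the constraint triples here are invented for illustration; the real ones come from the MONAD blocks):

```rust
const INPUT_LEN: usize = 14;

/// Largest serial number satisfying constraints input[n1] == input[n2] + o.
fn largest(constraints: &[(usize, usize, i32)]) -> u64 {
    let mut digits = [0i32; INPUT_LEN];
    for &(n1, n2, o) in constraints {
        digits[n1] = 9;     // dX as large as possible
        digits[n2] = 9 - o; // dY follows from dX = dY + o
    }
    digits.iter().fold(0u64, |v, &d| 10 * v + d as u64)
}

/// Smallest serial number satisfying the same constraints.
fn smallest(constraints: &[(usize, usize, i32)]) -> u64 {
    let mut digits = [0i32; INPUT_LEN];
    for &(n1, n2, o) in constraints {
        digits[n1] = 1 + o; // dX = dY + o with dY as small as possible
        digits[n2] = 1;
    }
    digits.iter().fold(0u64, |v, &d| 10 * v + d as u64)
}

fn main() {
    // Invented constraints covering all 14 digits, offsets in 0..=8.
    let constraints = [
        (1, 0, 3), (2, 3, 1), (5, 4, 0), (7, 6, 2),
        (9, 8, 4), (11, 10, 5), (13, 12, 1),
    ];
    let (lo, hi) = (smallest(&constraints), largest(&constraints));
    assert!(lo <= hi);
    println!("smallest: {}, largest: {}", lo, hi);
}
```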
After I had the solution, I put into code what I did manually (so it might work for other inputs assuming they differ in the offsets only).
pub const DIGIT_MIN: Val = 1;
pub const DIGIT_MAX: Val = 9;
pub const INPUT_LEN: usize = 14;
pub fn get_constraints(instructions: &[Inst]) -> Vec<(usize, usize, Val)> {
let block_starts = instructions
.iter()
.enumerate()
.filter(|(_, i)| i == &&Inst::Inp(Var::W))
.map(|(k, _)| k)
.collect::<Vec<_>>();
assert_eq!(INPUT_LEN, block_starts.len());
// determine module used to build stack
let module = instructions
.iter()
.filter_map(|i| {
if let Inst::Mod(Var::X, Op::Val(m)) = i {
Some(*m)
} else {
None
}
})
.next()
.expect("No module");
// stack of elements (n, o) where n is an input number and o an offset
// the actual value of the z variable will be
// stack.iter().fold(0, |z, (n, o)| z * module + input[n] + o);
let mut stack: Vec<(usize, Val)> = Vec::with_capacity(INPUT_LEN);
// constraints holds elements (n1, n2, o) and represents a constraint
// input[n1] == input[n2] + o, which needs to be satisfied for a valid serial number
let mut constraints = Vec::with_capacity(INPUT_LEN / 2);
// iterate over blocks
for (n1, k) in block_starts.iter().enumerate() {
// take last element of stack
// a single 'div z a' instruction is expected per block where a is either
// the module or 1. In case of a == module, the last element from z is removed
let element = if instructions[*k..]
.iter()
.filter_map(|i| {
if let Inst::Div(Var::Z, Op::Val(div)) = i {
Some(*div)
} else {
None
}
})
.next()
.expect("No div")
== module
{
// z is divided by 26, pop element
stack.pop()
} else {
// z is divided by 1, peek element
stack.last().map(|e| *e)
};
// a single instruction 'add x a' where a is a literal value is expected in a block
// the value a is used to build the condition
let off = instructions[*k..]
.iter()
.filter_map(|i| {
if let Inst::Add(Var::X, Op::Val(o)) = i {
Some(*o)
} else {
None
}
})
.next()
.expect("No check offset");
// the actual condition offset is the offset from the last element from stack + the
// condition offset
let off = element.map(|(_, o)| off + o).unwrap_or(off);
if off > -DIGIT_MAX && off < DIGIT_MAX {
// if the offset is greater than -9 and less than 9, this results in a constraint comparing
// two inputs: input[n1] = input[n2] + off. This is only valid if there was a last element on
// the stack
let (n2, _) = element.expect("Condition but no input digit to compare");
// normalize constraint to positive offsets
constraints.push(if off > 0 {
(n1, n2, off)
} else {
(n2, n1, -off)
});
// if constraint is satisfied, no additional element will be added to stack
} else {
// the condition check outcome is independent of the input values, add next element to stack
// there will be one instruction 'add y w' followed by an instruction 'add y a' where
// a is a literal value representing an offset
let k_add = instructions[*k..]
.iter()
.position(|i| i == &Inst::Add(Var::Y, Op::Var(Var::W)))
.expect("input is not copied to y");
if let Inst::Add(Var::Y, Op::Val(off)) = instructions[*k + k_add + 1] {
assert!(off >= 0 && off < module - DIGIT_MAX, "Input does not fit");
stack.push((n1, off));
} else {
panic!("No offset added to digit {}", n1);
}
}
}
// expect 7 constraints and stack empty (z equal to 0)
assert_eq!(INPUT_LEN / 2, constraints.len());
assert!(stack.is_empty());
constraints
}
pub fn solution_1(instructions: &[Inst]) -> u64 {
let inputs = &mut [0; INPUT_LEN];
for (n1, n2, o) in get_constraints(instructions) {
inputs[n1] = DIGIT_MAX;
inputs[n2] = DIGIT_MAX - o;
}
assert_eq!(0, execute(instructions, inputs));
inputs.iter().fold(0, |v, d| 10 * v + *d as u64)
}
pub fn solution_2(instructions: &[Inst]) -> u64 {
let inputs = &mut [0; INPUT_LEN];
for (n1, n2, o) in get_constraints(instructions) {
inputs[n1] = DIGIT_MIN + o;
inputs[n2] = DIGIT_MIN;
}
assert_eq!(0, execute(instructions, inputs));
inputs.iter().fold(0, |v, d| 10 * v + *d as u64)
}
I wanted to learn something today, so I implemented the instructions using structs, traits, and enums. This allows all range checks to be done at compile time.
/// Value type
pub type Val = i32;
/// Memory of variables
pub type Memory = [Val; 4];
/// Variable
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
pub enum Var {
W,
X,
Y,
Z,
}
impl std::ops::Index<Var> for Memory {
type Output = Val;
fn index(&self, index: Var) -> &Self::Output {
match index {
Var::W => &self[0],
Var::X => &self[1],
Var::Y => &self[2],
Var::Z => &self[3],
}
}
}
impl std::ops::IndexMut<Var> for Memory {
fn index_mut(&mut self, index: Var) -> &mut Self::Output {
match index {
Var::W => &mut self[0],
Var::X => &mut self[1],
Var::Y => &mut self[2],
Var::Z => &mut self[3],
}
}
}
impl std::str::FromStr for Var {
type Err = ();
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"w" => Ok(Var::W),
"x" => Ok(Var::X),
"y" => Ok(Var::Y),
"z" => Ok(Var::Z),
_ => Err(()),
}
}
}
/// Second operand (Variable or Literal)
#[derive(Debug, PartialEq, Eq)]
pub enum Op {
Var(Var),
Val(Val),
}
impl Op {
/// evaluate operand
pub fn eval(&self, vars: &Memory) -> Val {
match self {
Op::Var(v) => vars[*v],
Op::Val(v) => *v,
}
}
}
impl std::str::FromStr for Op {
type Err = std::num::ParseIntError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if let Ok(v) = s.parse() {
Ok(Op::Var(v))
} else {
s.parse().map(|v| Op::Val(v))
}
}
}
/// instruction
#[derive(Debug, PartialEq, Eq)]
pub enum Inst {
Inp(Var),
Add(Var, Op),
Mul(Var, Op),
Div(Var, Op),
Mod(Var, Op),
Eql(Var, Op),
}
impl Inst {
/// execute instruction
pub fn execute(&self, vars: &mut Memory, inputs: &mut impl Iterator<Item = Val>) {
match self {
Inst::Inp(v1) => vars[*v1] = inputs.next().expect("No input"),
Inst::Add(v1, v2) => vars[*v1] += v2.eval(vars),
Inst::Mul(v1, v2) => vars[*v1] *= v2.eval(vars),
Inst::Div(v1, v2) => vars[*v1] /= v2.eval(vars),
Inst::Mod(v1, v2) => vars[*v1] = vars[*v1] % v2.eval(vars),
Inst::Eql(v1, v2) => vars[*v1] = (vars[*v1] == v2.eval(vars)) as Val,
}
}
}
impl std::str::FromStr for Inst {
type Err = Box<dyn std::error::Error>;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let mut parts = s.trim().split(" ");
let name = parts
.next()
.ok_or_else(|| format!("'{}' contains no name part", s))?;
let first = parts
.next()
.ok_or_else(|| format!("'{}' contains no first operand", s))?
.parse()
.map_err(|()| format!("First part of '{}' is not a variable", s))?;
let second = parts.next().map_or(Ok(None), |s| s.parse().map(Some))?;
let second = || second.ok_or_else(|| format!("'{}' contains no second operand", s));
match name {
"inp" => Ok(Self::Inp(first)),
"add" => Ok(Self::Add(first, second()?)),
"mul" => Ok(Self::Mul(first, second()?)),
"div" => Ok(Self::Div(first, second()?)),
"mod" => Ok(Self::Mod(first, second()?)),
"eql" => Ok(Self::Eql(first, second()?)),
_ => Err(format!("'{}' is not a valid instruction name", name))?,
}
}
}
/// parse lines to instructions
pub fn parse(content: &str) -> Vec<Inst> {
content.lines().map(|line| line.parse().unwrap()).collect()
}
/// execute instructions
pub fn execute(instructions: &[Inst], inputs: &[Val]) -> Val {
let mut vars = Memory::default();
let mut inputs = inputs.iter().copied();
for instruction in instructions {
instruction.execute(&mut vars, &mut inputs)
}
vars[Var::Z]
}
#[cfg(test)]
mod tests {
use super::*;
const CONTENT: &str = "inp z
inp x
mul z 3
eql z x";
const INSTRUCTIONS: &[Inst] = &[
Inst::Inp(Var::Z),
Inst::Inp(Var::X),
Inst::Mul(Var::Z, Op::Val(3)),
Inst::Eql(Var::Z, Op::Var(Var::X)),
];
#[test]
fn test_parse() {
assert_eq!(INSTRUCTIONS, parse(CONTENT));
}
}
Rust solution to AoC|2021|25.
A simple one to conclude this year’s edition…
Not much to say here. By looking at both the initial and the updated grid when moving the south-facing cucumbers, I avoid creating two copies of the grid in each step. Still ~100ms computation time…
Here is my solution:
pub fn parse(content: &str) -> (Vec<C>, usize) {
let w = content.lines().next().expect("No content").trim().len();
let grid = content
.chars()
.filter(|c| !c.is_whitespace())
.map(|c| match c {
'>' => 1,
'v' => 2,
_ => 0,
})
.collect();
(grid, w)
}
pub const E: C = 1;
pub const S: C = 2;
pub const X: C = 0;
pub fn step(grid: &[C], w: usize) -> Option<Vec<C>> {
let mut grid_upd = vec![X; grid.len()];
let mut moved = false;
let h = grid.len() / w;
for x in 0..w {
let x_upd = (x + 1) % w;
for y in 0..h {
let k = x + w * y;
if grid[k] == E {
let k_upd = x_upd + w * y;
if grid[k_upd] == X {
grid_upd[k_upd] = E;
moved = true;
} else {
grid_upd[k] = E;
}
}
}
}
for y in 0..h {
let y_upd = (y + 1) % h;
for x in 0..w {
let k = x + w * y;
if grid[k] == S {
let k_upd = x + w * y_upd;
if grid[k_upd] != S && grid_upd[k_upd] == X {
grid_upd[k_upd] = S;
moved = true;
} else {
grid_upd[k] = S;
}
}
}
}
if moved {
Some(grid_upd)
} else {
None
}
}
pub fn solution_1(mut grid: Vec<C>, w: usize) -> usize {
let mut count = 1;
while let Some(upd) = step(&grid, w) {
grid = upd;
count += 1;
}
count
}
#[cfg(test)]
mod tests {
use super::*;
const CONTENT_00: &str = "v...>>.vv>
.vv>>.vv..
>>.>v>...v
>>v>>.>.v.
v>v.vv.v..
>.>>..v...
.vv..>.>v.
v.v..>>v.v
....v..v.>";
const CONTENT_01: &str = "....>.>v.>
v.v>.>v.v.
>v>>..>v..
>>v>v>.>.v
.>v.v...v.
v>>.>vvv..
..v...>>..
vv...>>vv.
>.v.v..v.v";
const CONTENT_10: &str = "..>..>>vv.
v.....>>.v
..v.v>>>v>
v>.>v.>>>.
..v>v.vv.v
.v.>>>.v..
v.v..>v>..
..v...>v.>
.vv..v>vv.";
const GRID_00: &[C] = &[
S, X, X, X, E, E, X, S, S, E, X, S, S, E, E, X, S, S, X, X, E, E, X, E, S, E, X, X, X, S,
E, E, S, E, E, X, E, X, S, X, S, E, S, X, S, S, X, S, X, X, E, X, E, E, X, X, S, X, X, X,
X, S, S, X, X, E, X, E, S, X, S, X, S, X, X, E, E, S, X, S, X, X, X, X, S, X, X, S, X, E,
];
const W: usize = 10;
#[test]
fn test_parse() {
let (grid, w) = parse(CONTENT_00);
assert_eq!(W, w);
assert_eq!(GRID_00, grid);
}
#[test]
fn test_step() {
let grid_00 = GRID_00.to_owned();
let (grid_01, _) = parse(CONTENT_01);
let (grid_10, _) = parse(CONTENT_10);
assert_eq!(Some(grid_01), step(&grid_00, W));
let mut grid = grid_00;
for _ in 0..10 {
grid = step(&grid, W).unwrap();
}
assert_eq!(grid_10, grid)
}
#[test]
fn test_solution_1() {
assert_eq!(58, solution_1(GRID_00.to_owned(), W));
}
}