This is my implementation for both rounds of the calorie counting puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language; easily readable code is a second priority this time, and writing fast code is a distant third.
All source code lives in the src
directory.
This solution contains a main.rs
, which defines the main executable.
There is also a data.rs
, which specifies data types important for this
solution as well as associated functions, methods, and traits.
There is also an io.rs
, which contains helper functions related to input
parsing and output processing.
main.rs
use anyhow::{Error, Result};
// Constants.
const NUM_ELVES: usize = 3;
fn solve(file: &str) -> Result<()> {
eprintln!("PROCESSING {}", file);
// Read file and convert into data.
let baggage = io::parse_lines_to_data::<data::Baggage>(file, "baggage")?;
let mut elves = data::baggages_to_elves(baggage);
// Elf carrying the most calories will be first in line. It is inefficient to calculate total
// calories for every comparison, but it's not really important for this exercise.
elves.sort_by(|el1, el2| el2.total_calories().cmp(&el1.total_calories()));
match elves.len() {
// Even though we could solve part 1 if we had 1..=2 elves, we ignore that case here.
0..=2 => Err(Error::msg("somehow, we found too few elves :(")),
_ => {
// Part 1.
println!(
"elf carrying the most is num {} who carries {} calories",
elves[0].get_idx(),
elves[0].total_calories()
);
// Part 2.
let total_calories: u64 = elves
.iter()
.take(NUM_ELVES)
.map(|el| el.total_calories())
.sum();
println!(
"the {} elves carrying the most carry {} in total\n",
NUM_ELVES, total_calories
);
Ok(())
}
}
}
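The comment in solve notes that the comparator recomputes total_calories on every comparison. A hedged sketch (using a simplified stand-in Elf type, not the one from data.rs) of how sort_by_cached_key would compute each key only once:

```rust
// A simplified stand-in for the Elf type from data.rs, just to show the idea.
struct Elf {
    calories: Vec<u64>,
}

impl Elf {
    fn total_calories(&self) -> u64 {
        self.calories.iter().sum()
    }
}

// sort_by_cached_key computes the key once per element instead of once per
// comparison; Reverse flips the order so the largest total comes first.
fn sort_elves(elves: &mut [Elf]) {
    elves.sort_by_cached_key(|el| std::cmp::Reverse(el.total_calories()));
}
```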
data.rs
use anyhow::{Error, Result};
use std::str::FromStr;
#[derive(Debug)]
pub enum Baggage {
Calories(u64),
EndOfElf,
}
#[derive(Debug)]
pub struct Elf {
idx: usize,
baggage: Vec<Baggage>,
}
impl FromStr for Baggage {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s {
"" => Ok(Baggage::EndOfElf),
val => Ok(Baggage::Calories(val.parse::<u64>()?)),
}
}
}
impl Elf {
pub fn total_calories(&self) -> u64 {
self.baggage
.iter()
.map(|el| match el {
Baggage::Calories(val) => val,
Baggage::EndOfElf => &0,
})
.sum()
}
pub fn get_idx(&self) -> usize {
self.idx
}
}
pub fn baggages_to_elves(baggage: Vec<Baggage>) -> Vec<Elf> {
let mut elves = vec![];
let mut elf = Elf {
idx: 1,
baggage: vec![],
};
for el in baggage {
match el {
Baggage::Calories(_) => {
elf.baggage.push(el);
}
Baggage::EndOfElf => {
let next_elf = Elf {
idx: elf.idx + 1,
baggage: vec![],
};
elves.push(elf);
elf = next_elf;
}
}
}
elves
}
There are currently no tests.
Please have a look at src/main.rs
for expected names of input files.
Assuming the expected files are present, you only need to execute cargo run
to
run the solution.
This is my implementation for both rounds of the RPS puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language; easily readable code is a second priority this time, and writing fast code is a distant third.
All source code lives in the src
directory.
This solution contains a main.rs
, which defines the main executable.
There is also a data.rs
, which specifies data types important for this
solution as well as associated functions, methods, and traits.
There is also an io.rs
, which contains helper functions related to input
parsing and output processing.
This one was pretty straightforward. I might have taken enums in Rust a bit far here, but I wanted to use them since Go, the language I used last year, doesn’t have enums. I also tried out a custom trait so that I could use generics to determine which round it was.
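As a minimal illustration of that trait-plus-generics idea (with made-up toy types, not the puzzle’s RoundPart1/RoundPart2), one shared trait lets a single generic driver pick the interpretation at the call site:

```rust
use std::str::FromStr;

// Toy types: both parse the same input but score it differently, and a
// shared trait unifies them behind one interface.
trait Scored {
    fn score(&self) -> usize;
}

struct Plain(usize);
struct Doubled(usize);

impl FromStr for Plain {
    type Err = std::num::ParseIntError;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        Ok(Self(s.parse()?))
    }
}

impl FromStr for Doubled {
    type Err = std::num::ParseIntError;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        Ok(Self(s.parse::<usize>()? * 2))
    }
}

impl Scored for Plain {
    fn score(&self) -> usize {
        self.0
    }
}

impl Scored for Doubled {
    fn score(&self) -> usize {
        self.0
    }
}

// One generic driver; the concrete type is chosen at the call site, e.g.
// total::<Plain>(...) versus total::<Doubled>(...).
fn total<T>(lines: &[&str]) -> usize
where
    T: FromStr + Scored,
    T::Err: std::fmt::Debug,
{
    lines.iter().map(|l| l.parse::<T>().unwrap().score()).sum()
}
```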
main.rs
use anyhow::{Error, Result};
use std::str::FromStr;
// Constants.
// None yet.
fn solve<T>(file: &str) -> Result<()>
where
T: FromStr<Err = Error>,
T: data::Round,
{
eprintln!("PROCESSING {}", file);
let mut scores = (0, 0);
// Read file and convert into data.
let rounds = io::parse_lines_to_data::<T>(file, "rounds")?;
for round in rounds {
let round_scores = round.score();
scores = (scores.0 + round_scores.0, scores.1 + round_scores.1);
}
println!("scores are opponent: {}, you: {}\n", scores.0, scores.1);
Ok(())
}
fn main() -> Result<()> {
// Funny that the example for part 1 would end in a draw, but that's not mentioned anywhere.
solve::<data::RoundPart1>(SAMPLE)?;
solve::<data::RoundPart1>(REAL)?;
solve::<data::RoundPart2>(SAMPLE)?;
solve::<data::RoundPart2>(REAL)?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::str::FromStr;
#[derive(Debug)]
pub enum RPS {
R,
P,
S,
}
#[derive(Debug)]
pub enum Outcome {
Win,
Loss,
Draw,
}
// This trait is used so that we don't have to care which round we're scoring.
pub trait Round {
fn score(&self) -> (usize, usize);
}
#[derive(Debug)]
pub struct RoundPart1 {
other: RPS,
me: RPS,
}
#[derive(Debug)]
pub struct RoundPart2 {
other: RPS,
outcome: Outcome,
}
impl FromStr for RPS {
type Err = Error;
// This parser can be used for rounds 1 and 2.
fn from_str(s: &str) -> Result<Self> {
match s {
"A" | "X" => Ok(RPS::R),
"B" | "Y" => Ok(RPS::P),
"C" | "Z" => Ok(RPS::S),
_ => Err(Error::msg(format!("cannot parse {} as RPS", s))),
}
}
}
impl FromStr for Outcome {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s {
"X" => Ok(Outcome::Loss),
"Y" => Ok(Outcome::Draw),
"Z" => Ok(Outcome::Win),
_ => Err(Error::msg(format!("cannot parse {} as Outcome", s))),
}
}
}
impl FromStr for RoundPart1 {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s.split_whitespace().collect::<Vec<_>>().as_slice() {
[other, me] => {
// Other error.
if !["A", "B", "C"].contains(other) {
Err(Error::msg(format!("unknown value {} for other", other)))
// Me error.
} else if !["X", "Y", "Z"].contains(me) {
Err(Error::msg(format!("unknown value {} for me", me)))
// Success case.
} else {
Ok(RoundPart1 {
other: other.parse::<RPS>()?,
me: me.parse::<RPS>()?,
})
}
}
_ => Err(Error::msg(format!("cannot parse {}", s))),
}
}
}
impl FromStr for RoundPart2 {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s.split_whitespace().collect::<Vec<_>>().as_slice() {
[other, result] => {
// Other error.
if !["A", "B", "C"].contains(other) {
Err(Error::msg(format!("unknown value {} for other", other)))
// Success case.
} else {
Ok(RoundPart2 {
other: other.parse::<RPS>()?,
outcome: result.parse::<Outcome>()?,
})
}
}
_ => Err(Error::msg(format!("cannot parse {}", s))),
}
}
}
impl Outcome {
fn score(&self) -> usize {
match &self {
Outcome::Loss => 0,
Outcome::Draw => 3,
Outcome::Win => 6,
}
}
}
impl RPS {
fn score(&self) -> usize {
match self {
RPS::R => 1,
RPS::P => 2,
RPS::S => 3,
}
}
// Needed for round 1. This function could benefit from testing so that we know that the
// results of calling it with (a, b) and with (b, a) are consistent.
fn check_win(&self, other: &RPS) -> Outcome {
match (self, other) {
// Self rock.
(RPS::R, RPS::R) => Outcome::Draw,
(RPS::R, RPS::P) => Outcome::Loss,
(RPS::R, RPS::S) => Outcome::Win,
// Self paper.
(RPS::P, RPS::R) => Outcome::Win,
(RPS::P, RPS::P) => Outcome::Draw,
(RPS::P, RPS::S) => Outcome::Loss,
// Self scissors.
(RPS::S, RPS::R) => Outcome::Loss,
(RPS::S, RPS::P) => Outcome::Win,
(RPS::S, RPS::S) => Outcome::Draw,
}
}
// Needed for round 2. This is called on the other's value with a desired outcome.
fn get_reply(&self, result: &Outcome) -> RPS {
match (self, result) {
// Rock.
(RPS::R, Outcome::Loss) => RPS::S,
(RPS::R, Outcome::Draw) => RPS::R,
(RPS::R, Outcome::Win) => RPS::P,
// Paper.
(RPS::P, Outcome::Loss) => RPS::R,
(RPS::P, Outcome::Draw) => RPS::P,
(RPS::P, Outcome::Win) => RPS::S,
// Scissors.
(RPS::S, Outcome::Loss) => RPS::P,
(RPS::S, Outcome::Draw) => RPS::S,
(RPS::S, Outcome::Win) => RPS::R,
}
}
}
// It turns out we do not need to track the other's score, but we only knew that after the fact...
impl Round for RoundPart1 {
fn score(&self) -> (usize, usize) {
(
self.other.score() + self.other.check_win(&self.me).score(),
self.me.score() + self.me.check_win(&self.other).score(),
)
}
}
impl Round for RoundPart2 {
fn score(&self) -> (usize, usize) {
let reply = self.other.get_reply(&self.outcome);
(
self.other.score() + self.other.check_win(&reply).score(),
reply.score() + reply.check_win(&self.other).score(),
)
}
}
There are currently no tests.
Please have a look at src/main.rs
for expected names of input files.
Assuming the expected files are present, you only need to execute cargo run
to
run the solution.
The expected input files are usually called sample.dat
and stage_1.dat
.
This is my implementation for both rounds of the rucksack reorganization puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language; easily readable code is a second priority this time, and writing fast code is a distant third.
All source code lives in the src
directory.
This solution contains a main.rs
, which defines the main executable.
There is also a data.rs
, which specifies data types important for this
solution as well as associated functions, methods, and traits.
There is also an io.rs
, which contains helper functions related to input
parsing and output processing.
This one was straightforward, but I am not too happy with how the solution
looks.
Working with the HashSet
type was not as easy as I thought, e.g. when trying
to compute the overlap of multiple sets.
Furthermore, I somehow misunderstood part 2 at first, which led me on a wild
goose chase.
Still, this works, but I lack the time to provide more details about the
implementation.
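For what it’s worth, one way to intersect an arbitrary number of HashSets is to fold over them with the BitAnd (&) operator that HashSet implements; a small sketch:

```rust
use std::collections::HashSet;

// Intersect arbitrarily many sets by folding with the BitAnd (`&`) operator
// implemented on &HashSet; an empty input yields an empty set.
fn intersect_all(sets: &[HashSet<char>]) -> HashSet<char> {
    match sets.split_first() {
        Some((first, rest)) => rest.iter().fold(first.clone(), |acc, s| &acc & s),
        None => HashSet::new(),
    }
}
```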
main.rs
use anyhow::{Error, Result};
// Constants.
// None yet.
fn ord(c: char) -> Result<usize> {
match c {
'a'..='z' => Ok((c as usize - 'a' as usize) + 1),
'A'..='Z' => Ok((c as usize - 'A' as usize) + 27),
_ => Err(Error::msg(format!(
"invalid character {} for ord conversion",
c,
))),
}
}
fn solve(file: &str) -> Result<()> {
eprintln!("PROCESSING {}", file);
// Read file and convert into data.
let rucksacks = io::parse_lines_to_data::<data::Rucksack>(file, "rucksack")?;
// Part 1.
let common = rucksacks
.iter()
.map(|el| (&el.left & &el.right).into_iter().collect::<Vec<_>>())
.enumerate()
.map(|(idx, el)| {
if el.len() == 1 {
ord(el[0])
} else {
Err(Error::msg(format!(
"entry {} has wrong length {}",
idx,
el.len()
)))
}
})
.collect::<Vec<_>>();
{
let mut has_err = false;
for c in common.as_slice() {
if let Err(err) = c {
eprintln!("{:?}", err);
has_err = true;
}
}
if has_err {
return Err(Error::msg("encountered at least one error"));
}
}
println!(
"part 1, total value is {}",
common.iter().flatten().sum::<usize>()
);
// Part 2.
let badges = rucksacks
.iter()
.map(|el| el.everything())
.collect::<Vec<_>>()
.chunks_exact(3)
.into_iter()
.map(|sets| {
if let [set1, set2, set3] = sets {
(&(set1 & set2) & set3).into_iter().collect::<Vec<_>>()
} else {
panic!("this will never happen due to the use of chunks_exact")
}
})
.enumerate()
.map(|(idx, el)| {
if el.len() == 1 {
ord(el[0])
} else {
Err(Error::msg(format!(
"entry {} has wrong length {}",
idx,
el.len()
)))
}
})
.collect::<Vec<_>>();
println!(
"part 2, total value is {}",
badges.iter().flatten().sum::<usize>()
);
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE)?;
solve(REAL)?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::collections::HashSet;
use std::str::FromStr;
#[derive(Debug)]
pub struct Rucksack {
pub left: HashSet<char>,
pub right: HashSet<char>,
}
impl Rucksack {
pub fn everything(&self) -> HashSet<char> {
&self.left | &self.right
}
}
impl FromStr for Rucksack {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
let left = str_to_set(&s[..s.len() / 2]);
let right = str_to_set(&s[s.len() / 2..]);
Ok(Rucksack { left, right })
}
}
fn str_to_set(s: &str) -> HashSet<char> {
HashSet::from_iter(s.chars())
}
There are currently no tests.
Please have a look at src/main.rs
for expected names of input files.
Assuming the expected files are present, you only need to execute cargo run
to
run the solution.
The expected input files are usually called sample.dat
and stage_1.dat
.
This is my implementation for both rounds of the camp cleanup puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language; easily readable code is a second priority this time, and writing fast code is a distant third.
All source code lives in the src
directory.
This solution contains a main.rs
, which defines the main executable.
There is also a data.rs
, which specifies data types important for this
solution as well as associated functions, methods, and traits.
There is also an io.rs
, which contains helper functions related to input
parsing and output processing.
This one was straightforward, which is why I tried my luck with a generic pair type. It worked out nicely as I only had to implement the parsing logic once but could use it for two concrete types. Then, it was just a matter of creating the methods that compute whether there is a full (part 1) or partial (part 2) overlap between two ranges, and of checking for overlaps in both directions (which is strictly not needed for part 2 but doesn’t hurt).
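The overlap logic described above can be sketched with plain tuples standing in for the Range type (the helper names here are hypothetical); notably, the partial-overlap test collapses to a single symmetric condition:

```rust
// Inclusive ranges as (start, end) tuples standing in for the Range type.
// Full overlap: one range contains the other, checked in both directions.
fn full_overlap(a: (u32, u32), b: (u32, u32)) -> bool {
    (a.0 <= b.0 && a.1 >= b.1) || (b.0 <= a.0 && b.1 >= a.1)
}

// Partial overlap as one symmetric condition: each range must start no
// later than the other one ends.
fn partial_overlap(a: (u32, u32), b: (u32, u32)) -> bool {
    a.0 <= b.1 && b.0 <= a.1
}
```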
main.rs
use anyhow::{Error, Result};
// Constants.
// None yet.
fn solve(file: &str) -> Result<()> {
eprintln!("PROCESSING {}", file);
// Read file and convert into data.
let pairs = io::parse_lines_to_data::<data::Pair>(file, "pair")?;
let full_overlaps = pairs
.iter()
.filter_map(|el| if el.full_overlap() { Some(el) } else { None })
.count();
println!("there are {} full overlaps", full_overlaps);
let partial_overlaps = pairs
.iter()
.filter_map(|el| if el.partial_overlap() { Some(el) } else { None })
.count();
println!("there are {} partial overlaps", partial_overlaps);
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE)?;
solve(REAL)?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::str::FromStr;
#[derive(Debug)]
pub struct GenPair<T, const SEP: char> {
pub left: T,
pub right: T,
}
#[derive(Debug)]
pub struct Num(usize);
impl FromStr for Num {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
Ok(Self(s.parse::<usize>()?))
}
}
impl<T, const SEP: char> FromStr for GenPair<T, { SEP }>
where
T: FromStr<Err = Error>,
{
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s.split(SEP).collect::<Vec<_>>().as_slice() {
[left_str, right_str] => Ok(Self {
left: left_str.parse::<T>()?,
right: right_str.parse::<T>()?,
}),
_ => Err(Error::msg(format!(
"cannot parse {} as pair with sep {}",
s, SEP
))),
}
}
}
pub type Range = GenPair<Num, '-'>;
pub type Pair = GenPair<Range, ','>;
impl Range {
fn contains(&self, other: &Range) -> bool {
self.left.0 <= other.left.0 && self.right.0 >= other.right.0
}
fn overlap(&self, other: &Range) -> bool {
(self.left.0 <= other.right.0 && self.left.0 >= other.left.0)
|| (self.right.0 >= other.left.0 && self.right.0 <= other.right.0)
}
}
impl Pair {
pub fn full_overlap(&self) -> bool {
self.left.contains(&self.right) || self.right.contains(&self.left)
}
pub fn partial_overlap(&self) -> bool {
self.left.overlap(&self.right) || self.right.overlap(&self.left)
}
}
There are currently no tests.
Please have a look at src/main.rs
for expected names of input files.
Assuming the expected files are present, you only need to execute cargo run
to
run the solution.
The expected input files are usually called sample.dat
and stage_1.dat
.
This is my implementation for both rounds of the supply stacks puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language; easily readable code is a second priority this time, and writing fast code is a distant third.
All source code lives in the src
directory.
This solution contains a main.rs
, which defines the main executable.
There is also a data.rs
, which specifies data types important for this
solution as well as associated functions, methods, and traits.
There is also an io.rs
, which contains helper functions related to input
parsing and output processing.
Once the inputs had been parsed, this one was straightforward to solve: you simply have to follow the instructions. Solving part 2 was particularly nice as I could just pass in a different crate movement function. I took the lazy approach for part 2 and implemented it via a temporary stack as an intermediary.
Parsing the stacks was a bit harder, on the other hand, because there was no separator string in place that would make it possible to easily distinguish the different stacks. I was tempted to preprocess the input via a shell script or even manually, which would have simplified parsing. But then I realised that a filter function could be used to extract the lines belonging to either the stack definitions or the movement instructions. The use of inexact chunking then made it possible to easily read in the different stacks.
Note that I have used two types to represent stacks: Vec<StackLine>
, which is
the type representing the stack input, and Vec<Stack>
, which is basically the
transpose of the first type.
Maybe using an actual matrix would have been a better idea, but this works, too.
I noted that many of the terms used in this challenge, such as "crate" or "move", are reserved words in Rust.
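The fixed-width chunking idea can be sketched as follows (parse_stack_line is a hypothetical stand-in for the FromStr implementation in data.rs):

```rust
// Each crate slot is 3 characters wide plus a one-space separator, so
// fixed-width chunks of 4 line up with the stacks. The last chunk may be
// only 3 characters long, which `chunks` handles gracefully.
fn parse_stack_line(s: &str) -> Vec<Option<char>> {
    s.chars()
        .collect::<Vec<_>>()
        .chunks(4)
        .map(|chunk| {
            // A slot like "[A] " carries a crate; an all-space slot is empty.
            if chunk.first() == Some(&'[') {
                chunk.get(1).copied()
            } else {
                None
            }
        })
        .collect()
}
```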
main.rs
use anyhow::{Error, Result};
// Constants.
// None yet.
fn lines_to_stacks(lines: &Vec<data::StackLine>) -> Result<Vec<data::Stack>> {
if let Some(num_stacks) = lines.iter().map(|el| el.stacks.len()).max() {
let mut stacks = vec![];
for stack_idx in 0..num_stacks {
let mut stack: data::Stack = vec![];
// We reverse the iterator because we obtained the lines from top to bottom but we need
// to build the stacks from the ground up. Thus, we iterate from the bottom to the
// top.
for line in lines.iter().rev() {
// We cannot be sure that every stack line contains the same number of entries.
// Thus, we use the ".get" method to be able to catch the case where one line ends
// before another. It turns out that every line has the same number of entries,
// making this safeguard unnecessary...
if let Some(Some(elem)) = line.stacks.get(stack_idx) {
stack.push(*elem);
}
}
stacks.push(stack);
}
Ok(stacks)
} else {
Err(Error::msg("only empty stacks obtained"))
}
}
fn check_bounds(idx: usize, len: usize, name: &str) -> Result<()> {
// Valid indices are 0..len, so idx == len is already out of bounds.
if idx >= len {
Err(Error::msg(format!(
"{} stack {} is out of bounds",
name, idx
)))
} else {
Ok(())
}
}
fn apply_move_part1(stacks: &mut Vec<data::Stack>, mov: &data::Move) -> Result<()> {
// Thanks to these two bounds checks, we know that the index operations below will never panic.
check_bounds(mov.src, stacks.len(), "source")?;
check_bounds(mov.dest, stacks.len(), "dest")?;
for _ in 0..mov.num {
if let Some(moved_elem) = &stacks[mov.src].pop() {
stacks[mov.dest].push(*moved_elem);
} else {
return Err(Error::msg(format!("cannot apply move {:?}", mov)));
}
}
Ok(())
}
fn apply_move_part2(stacks: &mut Vec<data::Stack>, mov: &data::Move) -> Result<()> {
// Thanks to these two bounds checks, we know that the index operations below will never panic.
check_bounds(mov.src, stacks.len(), "source")?;
check_bounds(mov.dest, stacks.len(), "dest")?;
// We are being lazy and are using a temporary stack to stash the crates away. That way, we
// keep the order intact when putting them back from the temporary stash to the final stash.
let mut temp_stack: data::Stack = vec![];
for _ in 0..mov.num {
if let Some(moved_elem) = &stacks[mov.src].pop() {
temp_stack.push(*moved_elem);
} else {
return Err(Error::msg(format!(
"cannot apply 1st half of move {:?}",
mov
)));
}
}
for _ in 0..mov.num {
if let Some(moved_elem) = &temp_stack.pop() {
stacks[mov.dest].push(*moved_elem);
} else {
return Err(Error::msg(format!(
"cannot apply 2nd half of move {:?}",
mov
)));
}
}
Ok(())
}
fn solve(
file: &str,
apply_move: fn(&mut Vec<data::Stack>, &data::Move) -> Result<()>,
) -> Result<()> {
eprintln!("PROCESSING {}", file);
// Read file and convert into data.
let moves =
io::parse_lines_to_data::<data::Move>(file, "move", Some(|el| el.contains("move")), None)?;
let stack_lines = io::parse_lines_to_data::<data::StackLine>(
file,
"stack line",
Some(|el| el.contains("[")),
None,
)?;
let mut stacks = lines_to_stacks(&stack_lines)?;
for mov in &moves {
apply_move(&mut stacks, mov)?;
}
let mut errs = vec![];
println!(
"the top elements are: {}",
stacks
.iter()
.enumerate()
.map(|(idx, el)| el
.last()
.map(|el| el.to_string())
.ok_or(Error::msg(format!("stack {} is empty", idx))))
.filter_map(|el| io::filter_and_remember_errs(el, &mut errs))
.collect::<Vec<_>>()
.join("")
);
io::process_remembered_errs(errs)
}
fn main() -> Result<()> {
solve(SAMPLE, apply_move_part1)?;
solve(REAL, apply_move_part1)?;
solve(SAMPLE, apply_move_part2)?;
solve(REAL, apply_move_part2)?;
Ok(())
}
data.rs
use crate::io;
use anyhow::{Error, Result};
use std::str::FromStr;
#[derive(Debug)]
pub struct Move {
pub num: usize,
pub src: usize,
pub dest: usize,
}
// We are using our own stack type here just so that the code is easier to read.
pub type Stack = Vec<char>;
// This is a temporary data type that we use to parse each line of the top part of the input.
#[derive(Debug)]
pub struct StackLine {
pub stacks: Vec<Option<char>>,
}
impl FromStr for Move {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s.split_whitespace().collect::<Vec<_>>().as_slice() {
["move", num, "from", src, "to", dest] => Ok(Self {
num: num.parse()?,
// We use zero-based indexing but the example uses one-based indexing. Thus, we
// convert here.
src: src
.parse::<usize>()?
.checked_sub(1)
.ok_or(Error::msg(format!("{} is not >=1", src)))?,
dest: dest
.parse::<usize>()?
.checked_sub(1)
.ok_or(Error::msg(format!("{} is not >=1", dest)))?,
}),
_ => Err(Error::msg(format!("cannot parse {} as move", s))),
}
}
}
// A hybrid between a result and an option.
pub enum Hybrid<T, E> {
Some(T),
Err(E),
None,
}
impl FromStr for StackLine {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
let mut errs = vec![];
let stacks = s
.chars()
.collect::<Vec<_>>()
// Chunking is important here. Each stack entry contains at most 4 characters.
// Thus, by chunking this way, we make sure to get exactly one chunk per stack.
// Luckily, none of the stacks contains multi-letter crates ^^.
.chunks(4)
.map(|el| match el {
// Case with data, can be 3 or 4 characters long.
['[', ch, ']', ' '] | ['[', ch, ']'] => Hybrid::Some(ch.clone()),
// Case without data.
[' ', ' ', ' ', ' '] | [' ', ' ', ' '] => Hybrid::None,
// Error case.
_ => Hybrid::Err(Error::msg(format!("cannot parse line {} as stack line", s))),
})
.map(|el| match el {
Hybrid::Some(val) => Some(val),
Hybrid::Err(err) => {
errs.push(format!("{:?}", err));
None
}
Hybrid::None => None,
})
.collect::<Vec<_>>();
io::process_remembered_errs(errs).map(|_| Self { stacks })
}
}
io.rs
use anyhow::{Context, Error, Result};
use std::fmt::Debug;
use std::str::FromStr;
fn read_lines_from_file(path: &str) -> Result<Vec<String>> {
Ok(std::fs::read_to_string(path)
.context("reading from disk")?
.trim_end()
.split('\n')
.map(|el| String::from(el))
.collect())
}
pub type Predicate = fn(&String) -> bool;
pub type Transform = fn(String) -> String;
pub fn parse_lines_to_data<T>(
file: &str,
type_name: &str,
filter: Option<Predicate>,
transform: Option<Transform>,
) -> Result<Vec<T>>
where
T: FromStr<Err = Error>,
{
let filter_fn = filter.unwrap_or(|_| true);
let transformer = transform.unwrap_or(|el| el);
let mut errs: Vec<String> = vec![];
// Read file and convert into actions.
let data = read_lines_from_file(file)
.context("reading lines")?
.into_iter()
.filter(filter_fn)
.map(transformer)
.enumerate()
.filter_map(|(idx, el)| {
match el
.parse::<T>()
.with_context(|| format!("cannot parse line {} as {}: {}", idx, type_name, el))
{
Ok(val) => Some(val),
Err(err) => {
errs.push(format!("{:?}", err));
None
}
}
})
.collect();
if errs.is_empty() {
Ok(data)
} else {
// Concatenate errors into one giant error message in case there were any in the file.
Err(Error::msg(errs.join("\n------------------\n")))
}
}
// Convert Result to Option but make sure to add all error messages to a vector of strings. Use
// "process_remembered_errs" to check whether there are any errors in the vector.
pub fn filter_and_remember_errs<I, E>(item: Result<I, E>, errs: &mut Vec<String>) -> Option<I>
where
E: Debug,
{
match item {
Ok(val) => Some(val),
Err(err) => {
errs.push(format!("{:?}", err));
None
}
}
}
// If there is any element in the string vector, concatenate all elements into an error. Do not
// return an error otherwise.
pub fn process_remembered_errs(errs: Vec<String>) -> Result<()> {
if errs.is_empty() {
Ok(())
} else {
// Concatenate errors into one giant error message in case there were any in the file.
Err(Error::msg(errs.join("\n------------------\n")))
}
}
There are currently no tests.
Please have a look at src/main.rs
for expected names of input files.
Assuming the expected files are present, you only need to execute cargo run
to
run the solution.
The expected input files are usually called sample.dat
and stage_1.dat
.
This is my implementation for both rounds of the tuning trouble puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language; easily readable code is a second priority this time, and writing fast code is a distant third.
All source code lives in the src
directory.
This solution contains a main.rs
, which defines the main executable.
There is also a data.rs
, which specifies data types important for this
solution as well as associated functions, methods, and traits.
There is also an io.rs
, which contains helper functions related to input
parsing and output processing.
For this one, input parsing was non-existent.
Processing, on the other hand, was a bit harder.
At first, I thought about implementing the uniqueness condition manually,
considering that we only had to compare four entries in a pairwise fashion.
I’m glad I didn’t do that, though, because of part two.
Instead, it became clear that a set (HashSet
in rust) could be used.
A set is a collection of unique entries. Imagine converting a list of somethings into a set of somethings: the only way the list and the set can have the same number of entries is if all entries in the list are unique. That is what this code uses.
My biggest struggle was Rust’s ownership system and unsatisfied trait bounds that I didn’t even know existed. After the first functioning implementation, I cleaned the code up a bit. It’s surprisingly concise.
I was happy to have discovered the windows(size)
method usable with slices
that produces an iterator over overlapping chunks of size
elements.
Now, I only had to check each of them for uniqueness.
That involved some slice-vector-iterator conversions, which appear to be very
common in Rust.
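The window-plus-set idea boils down to a few lines. This sketch uses position instead of the enumerate/filter_map/take combination from main.rs, but it performs the same check:

```rust
use std::collections::HashSet;

// Returns the index just past the first window of `win` pairwise-distinct
// characters, i.e. the number of characters processed up to that point.
fn first_unique_window(s: &str, win: usize) -> Option<usize> {
    let chars: Vec<char> = s.chars().collect();
    chars
        .windows(win)
        // The window is all-unique exactly when the set has `win` entries.
        .position(|w| w.iter().collect::<HashSet<_>>().len() == win)
        .map(|idx| idx + win)
}
```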
main.rs
use anyhow::{Error, Result};
use std::collections::HashSet;
// Constants.
fn solve(file: &str, win: usize) -> Result<()> {
eprintln!("PROCESSING {}", file);
// Read file.
let lines = io::read_lines_from_file(file)?;
for line in lines {
let first_match = line
.chars()
.collect::<Vec<_>>()
.as_slice()
.windows(win)
.enumerate()
.filter_map(|(idx, el)| {
// If the size of a set is equal to the window size, then we have only unique
// entries. There is no other way.
if HashSet::<char>::from_iter(el.to_vec().into_iter()).len() == win {
Some(idx)
} else {
None
}
})
.take(1)
.next()
.ok_or(Error::msg("cannot find matching entry"))?;
println!("first line that fits: {}", first_match + win)
}
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE, 4)?;
solve(REAL, 4)?;
solve(SAMPLE, 14)?;
solve(REAL, 14)?;
Ok(())
}
There are currently no tests.
Please have a look at src/main.rs
for expected names of input files.
Assuming the expected files are present, you only need to execute cargo run
to
run the solution.
The expected input files are usually called sample.dat
and stage_1.dat
.
This is my implementation for both rounds of this day’s puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language; easily readable code is a second priority this time, and writing fast code is a distant third.
All source code lives in the src
directory.
This solution contains a main.rs
, which defines the main executable.
There is also a data.rs
, which specifies data types important for this
solution as well as associated functions, methods, and traits.
There is also an io.rs
, which contains helper functions related to input
parsing and output processing.
This solution is not pretty but I kind of messed up a bit at first and then ran out of time. A lack of sleep could have contributed.
Parsing the input into a native data structure was straightforward thanks to Rust’s powerful enums. But then the problems started. For some reason, I decided to implement a fake directory structure manually as an exercise and lost a lot of time that way. As an alternative, I decided to take an idea from rsync’s cookbook and represent files and directories as strings but ensure that directories always end with a slash. That way, I could have one map from paths to sizes but could easily determine whether something was a file or a directory.
Then, the process was "just" building such a set of paths, finding all files
under a directory, and applying some operations to that.
I’m not happy with how the solution turned out but it works.
I was surprised by how much special care is needed when working with the root
directory /
.
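The trailing-slash convention can be sketched as follows. Here, dir_size is a hypothetical helper that recomputes a directory’s size from the file entries by prefix matching, whereas the actual solution accumulates sizes while building the map:

```rust
use std::collections::HashMap;

// Directories end with '/', files do not, so one flat map holds both kinds
// of entry, and everything under a directory shares its path as a prefix.
fn dir_size(fs: &HashMap<String, usize>, dir: &str) -> usize {
    fs.iter()
        // Keep file entries (no trailing slash) below the given directory.
        .filter(|(path, _)| path.starts_with(dir) && !path.ends_with('/'))
        .map(|(_, size)| size)
        .sum()
}
```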
main.rs
use anyhow::{Error, Result};
use std::collections::HashMap;
// Constants.
// Part 1.
const MAX_SIZE: usize = 100000;
// Part 2.
const TOTAL_SIZE: usize = 70000000;
const REQUIRED_SIZE: usize = 30000000;
// We don't use a tree to represent the file system because trees are hard. Instead, we use a map
// mapping slash-separated strings to size values. An entry ending in a slash is a directory. That
// way, we can build everything up at once.
fn build_fs(entries: Vec<data::Entry>) -> Result<HashMap<String, usize>> {
let mut cwd = data::Stack::new();
let mut fs = HashMap::<String, usize>::new();
// This boolean serves as a way to check that we retrieve listed values only after the ls
// command has been issued. It's just a sanity check for the input.
let mut listing = false;
fs.insert("/".to_string(), 0);
// Build up the file system.
for entry in entries {
match entry {
data::Entry::CD(dir) => {
listing = false;
match dir.as_str() {
".." => cwd.popd(),
"/" => cwd.clear(),
_ => cwd.pushd(dir),
}
// Entries with a trailing slash are directories.
let dir = cwd.pwd();
if !fs.contains_key(&dir) {
fs.insert(dir, 0);
}
}
data::Entry::LS => {
listing = true;
}
data::Entry::DIR(dir) => {
if !listing {
return Err(Error::msg("found dir entry but not in list mode"));
}
// Entries with a trailing slash are directories.
fs.insert(format!("{}{}/", cwd.pwd(), dir), 0);
}
data::Entry::FILE { name, size } => {
if !listing {
return Err(Error::msg("found file entry but not in list mode"));
}
let dir = cwd.pwd();
if !fs.contains_key(&dir) {
return Err(Error::msg(format!("missing parent node {}", dir)));
}
// Entries without a trailing slash are files.
fs.insert(format!("{}{}", dir, name), size);
// Add directory sizes.
for dir in &mut cwd {
*fs.get_mut(&dir)
.ok_or(Error::msg(format!("cannot read directory {}", dir)))? += size;
}
}
}
}
Ok(fs)
}
fn solve(file: &str) -> Result<()> {
eprintln!("PROCESSING {}", file);
// Read file and convert into data.
let entries = io::parse_lines_to_data::<data::Entry>(file, "entry", None, None)?;
let filesystem = build_fs(entries)?;
// Part 1.
let result_part1 = filesystem
.iter()
.filter_map(|(name, size)| {
if name.ends_with("/") && size <= &MAX_SIZE {
Some(size)
} else {
None
}
})
.sum::<usize>();
println!("requested size is {}", result_part1);
// Part 2.
// We already accumulated sizes, so getting this is easy. The amount of free space is the total
// size minus what we currently occupy, which is the size of the root directory.
// Note the use of checked_sub here because I wanted to try it out for subtracting from
// unsigned values. Those checked_* methods allow graceful handling of overflows. Without them,
// rust would panic if there was a violation of a type's value range.
let used_space = *filesystem
.get("/")
.ok_or(Error::msg("cannot retrieve used space"))?;
let free_space = TOTAL_SIZE
.checked_sub(used_space)
.ok_or(Error::msg("cannot compute free space"))?;
let required_space = REQUIRED_SIZE
.checked_sub(free_space)
.ok_or(Error::msg("cannot compute required space"))?;
eprintln!("need to free up at least {}", required_space);
// Find the smallest directory that fulfils that condition.
let min_free_size = filesystem
.iter()
.filter_map(|(name, size)| {
if name.ends_with("/") && size >= &required_space {
Some(size)
} else {
None
}
})
.min()
.ok_or(Error::msg("cannot find any directory for part 2"))?;
println!("freeing {} is enough", min_free_size);
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE)?;
solve(REAL)?;
Ok(())
}
data.rs
use crate::io;
use anyhow::{Error, Result};
use std::str::FromStr;
#[derive(Debug)]
pub enum Entry {
CD(String),
LS,
DIR(String),
FILE { name: String, size: usize },
}
impl FromStr for Entry {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s.split_whitespace().collect::<Vec<_>>().as_slice() {
["$", "cd", dir] => Ok(Self::CD(dir.to_string())),
["$", "ls"] => Ok(Self::LS),
["dir", dir] => Ok(Self::DIR(dir.to_string())),
[size, name] => Ok(Self::FILE {
name: name.to_string(),
size: size.parse::<usize>()?,
}),
_ => Err(Error::msg(format!("cannot parse {}", s))),
}
}
}
pub struct Stack {
it_idx: usize,
entries: Vec<String>,
}
impl Stack {
pub fn new() -> Self {
Self {
it_idx: 0,
entries: vec![],
}
}
pub fn pwd(&self) -> String {
if self.entries.len() == 0 {
"/".to_string()
} else {
format!("/{}/", self.entries.join("/"))
}
}
pub fn pushd(&mut self, dir: String) {
self.entries.push(dir);
}
pub fn popd(&mut self) {
// Ignore this here.
_ = self.entries.pop();
}
pub fn clear(&mut self) {
self.entries.clear();
}
}
impl Iterator for Stack {
type Item = String;
fn next(&mut self) -> Option<Self::Item> {
if self.it_idx > self.entries.len() {
self.it_idx = 0;
None
} else if self.it_idx == 0 {
self.it_idx += 1;
Some("/".to_string())
} else {
self.it_idx += 1;
Some(format!("/{}/", self.entries[0..self.it_idx - 1].join("/")))
}
}
}
This time, I’ve even added some tests for some helper functions. Those tests let me ensure that the special handling of the root directory works as intended. Adding tests to a file in Rust was pretty easy and a nice experience.
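A sketch of what such a test module can look like (the snippet re-declares a minimal version of the Stack type from data.rs so that it compiles on its own; the actual tests in the repository may differ):

```rust
// Minimal stand-in for the Stack type defined in data.rs.
pub struct Stack {
    entries: Vec<String>,
}

impl Stack {
    pub fn new() -> Self {
        Self { entries: vec![] }
    }
    // An empty stack denotes the root; otherwise, directory names are joined
    // with slashes and wrapped in slashes.
    pub fn pwd(&self) -> String {
        if self.entries.is_empty() {
            "/".to_string()
        } else {
            format!("/{}/", self.entries.join("/"))
        }
    }
    pub fn pushd(&mut self, dir: String) {
        self.entries.push(dir);
    }
    pub fn popd(&mut self) {
        let _ = self.entries.pop();
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    // Ensure the special handling of the root directory works as intended.
    #[test]
    fn pwd_handles_root() {
        let mut stack = Stack::new();
        assert_eq!(stack.pwd(), "/");
        stack.pushd("a".to_string());
        stack.pushd("b".to_string());
        assert_eq!(stack.pwd(), "/a/b/");
        stack.popd();
        assert_eq!(stack.pwd(), "/a/");
        stack.popd();
        assert_eq!(stack.pwd(), "/");
    }
}
```

Such a module runs via cargo test.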
Please have a look at src/main.rs for expected names of input files. Assuming the expected files are present, you only need to execute cargo run to run the solution. The expected input files are usually called sample.dat and stage_1.dat.
This is my implementation for both rounds of this day’s puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language, while the creation of easily readable code is a second priority, this time. Writing fast code is a far-away third priority.
All source code lives in the src directory. This solution contains a main.rs, which defines the main executable. There is also a data.rs, which specifies data types important for this solution as well as associated functions, methods, and traits. There is also an io.rs, which contains helper functions related to input parsing and output processing.
I decided not to go the obvious route and use a HashMap mapping tree positions to tree sizes instead. I also increased all tree sizes by one so that I could use unsigned numbers for tree sizes but still include the smallest trees. The main part of this solution is a function that checks which trees are visible and returns a HashSet of the positions of all visible trees. I can then take the union of those sets to solve part 1. The aforementioned function can scan several parallel lines one after the other. The nice thing about using a HashSet is that I don’t have to concern myself with edges. I can simply retrieve values until I can find none after displacing the checked position and then stop iterating.
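Combining the per-direction visibility sets boils down to repeated set unions; the standard library's HashSet implements the | operator for exactly that. A minimal sketch of the pattern (the positions here are made up for illustration):

```rust
use std::collections::HashSet;

fn main() {
    // Hypothetical visibility sets for two scan directions; the real solution
    // produces such sets with count_visible.
    let from_left: HashSet<(i64, i64)> = [(0, 0), (1, 0)].into_iter().collect();
    let from_top: HashSet<(i64, i64)> = [(0, 0), (0, 1)].into_iter().collect();
    // The | operator on references computes the union as a new set, so trees
    // visible from several directions are counted only once.
    let visible = &from_left | &from_top;
    assert_eq!(visible.len(), 3);
}
```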
For part 2, I use the same function but modify the visibility condition. That is, the maximum allowed height is that of the tree I look at. Furthermore, we accept all trees smaller than our current tree. I find that unrealistic, which caused some delay. Imagine this (the tree height is at the top and the index at the bottom):
4 3 3 1 4
|       |
| | |   |
| | |   |
| | | | |
0 1 2 3 4
Tree 0 is the one we’re looking at. For part 2, this would mean there are 4 trees visible from 0, namely 1, 2, 3, and 4. But, in my view, tree 3 is covered by trees 1 and 2 and should not be visible. Anyway.
main.rs
use anyhow::{Error, Result};
use std::collections::{HashMap, HashSet};
// Constants.
// None yet.
// Count trees visible in a direction count_disp from all positions that can be reached from
// start_pos + n * outer_disp for all n that still yield a tree. If outer_disp is None, use only
// n==0. The search stops at the latest if no more trees can be found in that direction, assuming a
// dense forest.
//
// Whether a tree still counts as visible is defined by size_cmp. For part 1, it compares the
// current tree's size with the size of the largest tree found so far. For part 2, it always
// returns true because max_height has been set to that of the tree in consideration.
//
// A search in one direction stops early after a tree of max_height has been found because no trees
// behind it can be visible. The value for max_height differs between parts 1 and 2. For part 1,
// it's the global maximum and for part 2 it's the size of the tree in consideration.
fn count_visible(
forest: &HashMap<(i64, i64), data::Tree>,
start_pos: &data::Vec,
count_disp: &data::Vec,
outer_disp: Option<&data::Vec>,
max_height: &u8,
size_cmp: fn(&u8, &u8) -> bool,
) -> HashSet<(i64, i64)> {
let mut visible_forest = HashSet::<(i64, i64)>::new();
let mut outer_start = start_pos.clone();
// This will automatically stop if we cannot retrieve any more trees. Assuming a dense forest,
// that means once we reached the edge.
while let Some(_) = forest.get(&outer_start.pos()) {
let mut largest: u8 = 0;
let mut pos = outer_start.clone();
while &largest < max_height && let Some(tree) = forest.get(&pos.pos()) {
let size = tree.size();
// Remember the positions of trees that pass the size condition.
if size_cmp(&size, &largest) {
visible_forest.insert(pos.pos());
largest = size;
}
pos = pos.add(&count_disp);
}
// If we want to search in the same direction from different starting positions, update the
// starting position and go on searching. If not, end the outer loop early.
if let Some(disp) = outer_disp {
outer_start = outer_start.add(disp);
} else {
break;
}
}
visible_forest
}
fn solve(file: &str) -> Result<()> {
eprintln!("PROCESSING {}", file);
// Read file and convert into data.
let forest = io::parse_chars_to_data::<data::Tree>(file, "tree")?;
// Part 1.
// Get dimensions of forest in all three directions. That could have been avoided by using some
// matrix structure but I wanted to use a HashMap here, so this is necessary.
let max_x = forest
.keys()
.map(|el| el.0)
.max()
.ok_or(Error::msg("cannot find max x index"))?;
let max_y = forest
.keys()
.map(|el| el.1)
.max()
.ok_or(Error::msg("cannot find max y index"))?;
let max_height = forest
.values()
.map(|val| val.size())
.max()
.ok_or(Error::msg("cannot find max height"))?;
// Compute union of all visible forests (or rather, tree positions).
let mut count = HashSet::<(i64, i64)>::new();
// Top border rightwards.
count = &count
| &count_visible(
&forest,
&data::Vec::new(0, 0),
&data::Vec::new(0, 1),
Some(&data::Vec::new(1, 0)),
&max_height,
|size, largest| size > largest,
);
// Left border downwards.
count = &count
| &count_visible(
&forest,
&data::Vec::new(0, 0),
&data::Vec::new(1, 0),
Some(&data::Vec::new(0, 1)),
&max_height,
|size, largest| size > largest,
);
// Bottom border rightwards.
count = &count
| &count_visible(
&forest,
&data::Vec::new(0, max_y),
&data::Vec::new(0, -1),
Some(&data::Vec::new(1, 0)),
&max_height,
|size, largest| size > largest,
);
// Right border downwards.
count = &count
| &count_visible(
&forest,
&data::Vec::new(max_x, 0),
&data::Vec::new(-1, 0),
Some(&data::Vec::new(0, 1)),
&max_height,
|size, largest| size > largest,
);
println!("visible are {} trees", count.len());
// Part 2.
let disps = vec![
data::Vec::new(0, -1),
data::Vec::new(-1, 0),
data::Vec::new(0, 1),
data::Vec::new(1, 0),
];
let best_view = forest
.iter()
.map(|(pos, height)| {
disps
.iter()
.map(|disp| {
let tree_pos = data::Vec::new(pos.0, pos.1);
count_visible(
&forest,
// Start searching at the first tree in the search direction from the
// starting position.
&tree_pos.add(&disp),
disp,
// Don't search along multiple parallel lines.
None,
&height.size(),
|_size, _largest| true,
)
.len()
})
// The scenic score for a tree is the product of the number of trees it can see in
// every direction.
.product::<usize>()
})
.max()
.ok_or(Error::msg("cannot find best view"))?;
println!("best view is {}", best_view);
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE)?;
solve(REAL)?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::str::FromStr;
#[derive(Debug)]
pub struct Tree(u8);
impl FromStr for Tree {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
// We increase the size by one to be able to perform simple unsigned size comparisons.
Ok(Self(
s.parse::<u8>()?
.checked_add(1)
.ok_or(Error::msg("cannot increase tree size"))?,
))
}
}
impl Tree {
pub fn size(&self) -> u8 {
self.0
}
}
#[derive(Debug, Clone)]
pub struct Vec {
x: i64,
y: i64,
}
impl Vec {
pub fn add(&self, other: &Vec) -> Self {
Self {
x: self.x + other.x,
y: self.y + other.y,
}
}
pub fn pos(&self) -> (i64, i64) {
(self.x, self.y)
}
pub fn new(x: i64, y: i64) -> Self {
Self { x, y }
}
}
There are no tests this time.
Please have a look at src/main.rs for expected names of input files. Assuming the expected files are present, you only need to execute cargo run to run the solution. The expected input files are usually called sample.dat and stage_1.dat.
This is my implementation for both rounds of this day’s puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language, while the creation of easily readable code is a second priority, this time. Writing fast code is a far-away third priority.
All source code lives in the src directory. This solution contains a main.rs, which defines the main executable. There is also a data.rs, which specifies data types important for this solution as well as associated functions, methods, and traits. There is also an io.rs, which contains helper functions related to input parsing and output processing.
Solving this was quite a bit of fun! I created a Vec class to represent 2d vectors and was quite happy to have outsourced a lot of the logic into that class. For example, the class offers a mv method, which only ever lets it move by one space, which is a safeguard against trying to move too far at once. I haven’t tried moving by more than one space at a time because it seemed overly complicated. Furthermore, the custom Vec class has an iter method that provides an iterator over unit-size steps (unit vectors in the sense of the Manhattan metric) that, if followed, ensure the same distance has been traveled as described by the original vector. That way, a simple iteration over iterators gave us all the steps we needed.
For solving part 2, one problem was borrowing two elements of the rope simultaneously while one was even borrowed mutably. That won’t work in Rust. Instead, I use a temporary variable to keep track of the updated position of the previous knot in the rope.
I’ve also played around with lifetimes a bit.
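The temporary-variable trick can be shown in isolation. The sketch below uses plain integers instead of data::Vec and a made-up follow rule (move halfway towards the reference value), but the borrow pattern is the same: copy the previously updated element instead of holding a second borrow into the vector.

```rust
fn main() {
    // A "rope" of one head and three knots.
    let mut rope = vec![10, 0, 0, 0];
    // Copy the head's value; holding &rope[0] here would conflict with the
    // mutable iteration below.
    let mut ref_pos = rope[0];
    for knot in rope.iter_mut().skip(1) {
        // Stand-in update rule: move halfway towards the reference knot.
        *knot += (ref_pos - *knot) / 2;
        // Remember this knot's new position for the next iteration.
        ref_pos = *knot;
    }
    assert_eq!(rope, vec![10, 5, 2, 1]);
}
```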
main.rs
use anyhow::{Error, Result};
use std::collections::HashSet;
// Constants.
// None yet.
// Define some helper functions that allow easy conversions from an Option<data::Vec> to a
// Result<data::Vec> because the latter lets us use the question mark operator for unobtrusive
// error forwarding.
fn tail(rope: &Vec<data::Vec>) -> Result<data::Vec> {
rope.last()
.map(|el| el.clone())
.ok_or(Error::msg("cannot get tail"))
}
// Yeah, playing with lifetimes.
fn get_mut<'a>(rope: &'a mut Vec<data::Vec>, idx: usize) -> Result<&'a mut data::Vec> {
rope.get_mut(idx)
.ok_or(Error::msg("cannot get element mutably"))
}
fn get<'a>(rope: &'a Vec<data::Vec>, idx: usize) -> Result<&'a data::Vec> {
rope.get(idx).ok_or(Error::msg("cannot get element"))
}
fn solve(file: &str, length: usize) -> Result<()> {
eprintln!("PROCESSING {} WITH LENGTH {}", file, length);
// Read file and convert into data.
let updates = io::parse_lines_to_data::<data::Vec>(file, "vec", None, None)?;
// All knots start at the very same position.
let mut rope = vec![data::NULL_VEC; length + 2];
let mut visited_by_tail = HashSet::<data::Vec>::new();
visited_by_tail.insert(tail(&rope)?);
// Part 1.
// The `iter` method for a vector provides an iterator over a set of unit-size steps that, if
// followed, will ensure that we have traveled the entire distance described by the vector.
for update in updates.iter().map(|el| el.iter()).flatten() {
// The mv method will make sure we never move the head farther than one space. This is just
// a safeguard for errors in the code.
get_mut(&mut rope, 0)?.mv(&update)?;
// Remember only the position of the reference vector, which is the head so far. That
// involves a clone.
let mut ref_vec = *get(&rope, 0)?;
// Update all others with reference to their previous entry.
for knot in rope.iter_mut() {
// Move the knot with respect to the reference knot, which is always the previous one.
*knot = knot.add(&ref_vec.get_tail_update(&knot));
// Update the reference knot's position. Because we can only ever borrow one element as
// mutable in rust and once we did so, we cannot borrow anything else, we clone it
// here. That's inefficient but I couldn't find an easy way around it without resorting
// to `unsafe`, which I want to avoid. Sadly, it seems as if `split_at_mut` involves
// `unsafe`.
ref_vec = *knot;
}
visited_by_tail.insert(tail(&rope)?);
}
println!(
"the tail visited {} unique spots for a rope of length {}",
visited_by_tail.len(),
rope.len(),
);
Ok(())
}
fn main() -> Result<()> {
// Part 1.
solve(SAMPLE1, 0)?;
solve(SAMPLE2, 0)?;
solve(REAL, 0)?;
// Part 2.
solve(SAMPLE1, 8)?;
solve(SAMPLE2, 8)?;
solve(REAL, 8)?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::str::FromStr;
#[derive(Debug, Clone, Hash, PartialEq, Eq, Copy)]
pub struct Vec {
x: i64,
y: i64,
}
pub const NULL_VEC: Vec = Vec { x: 0, y: 0 };
impl FromStr for Vec {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s
.split_whitespace()
.collect::<std::vec::Vec<_>>()
.as_slice()
{
["R", dist] => Ok(Self {
x: dist.parse()?,
y: 0,
}),
["L", dist] => Ok(Self {
x: -dist.parse()?,
y: 0,
}),
["U", dist] => Ok(Self {
x: 0,
y: dist.parse()?,
}),
["D", dist] => Ok(Self {
x: 0,
y: -dist.parse()?,
}),
_ => Err(Error::msg(format!("cannot parse {} as vector", s))),
}
}
}
impl Vec {
pub fn add(&self, other: &Vec) -> Self {
Self {
x: self.x + other.x,
y: self.y + other.y,
}
}
// Length in infinity metric.
fn infinity_len(&self) -> usize {
self.x.abs() as usize + self.y.abs() as usize
}
fn is_infinity_unit(&self) -> bool {
self.infinity_len() == 1
}
// Map to the 2d unit sphere in manhattan metric. The null vector cannot be mapped and, thus,
// remains unchanged.
fn as_manhattan_unit(&self) -> Self {
Self {
x: self.x.clamp(-1, 1),
y: self.y.clamp(-1, 1),
}
}
// Move the vector exactly one space along one direction.
pub fn mv(&mut self, other: &Vec) -> Result<()> {
if other.is_infinity_unit() {
*self = self.add(other);
Ok(())
} else {
Err(Error::msg(format!("cannot move by {:?}", other)))
}
}
// Provide an iterator over unit-sized steps (unit in manhattan metric not infinity metric)
// that, if followed, describes the same distance traveled as `self`.
pub fn iter(&self) -> std::vec::IntoIter<Self> {
let unit = self.as_manhattan_unit();
let mut pos = self.as_manhattan_unit();
// As this is my second time working with a custom iterator, I was not sure how to avoid
// cloning here.
let mut disps = vec![];
while &pos != self {
let new_pos = pos.add(&unit);
disps.push(unit);
pos = new_pos;
}
disps.push(unit);
disps.into_iter()
}
pub fn get_tail_update(&self, other: &Self) -> Self {
let diff = Self {
x: self.x - other.x,
y: self.y - other.y,
};
// We want to update with a unit vector in manhattan metric, but only if that would not
// mean that `other` is on the same space as `self`.
if &other.add(&diff.as_manhattan_unit()) == self {
NULL_VEC
} else {
diff.as_manhattan_unit()
}
}
}
There are no tests this time.
Please have a look at src/main.rs for expected names of input files. Assuming the expected files are present, you only need to execute cargo run to run the solution. The expected input files are usually called sample.dat and stage_1.dat.
This is my implementation for both rounds of today’s puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language, while the creation of easily readable code is a second priority, this time. Writing fast code is a far-away third priority.
All source code lives in the src directory. This solution contains a main.rs, which defines the main executable. There is also a data.rs, which specifies data types important for this solution as well as associated functions, methods, and traits. There is also an io.rs, which contains helper functions related to input parsing and output processing.
It’s getting really interesting now. I quite liked how part 2 didn’t require you to compute a number or a sequence of letters but instead required you to render something on screen. My solution is less nice, but there is not much time on the weekend.
Quite early on, I decided to ignore all the weirdness due to a noop taking one cycle and an addition taking two cycles. Instead, I moved to a parallel world where each instruction took exactly one cycle, which meant replacing each 2-cycle addition by a noop followed by a 1-cycle addition. So far, so straightforward.
Furthermore, I ignored all the weirdness due to what is at the beginning, during, or at the end of a cycle, but instead only looked at the register during each cycle. I also had my 1-cycle addition act during its cycle instead of at the cycle’s end. I also started cycle counting at 0 instead of 1.
To map the register value during a cycle from my fictitious world back to the world of the puzzle, I only had to add 2 to the cycle count. That also means my world had no real cycle one, which meant I had to treat that one separately for part 2. For part 1, in order to look at the value during cycle 20, I had to skip only the first 18 entries.
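The index bookkeeping can be spelled out as a tiny sanity check: after skipping the first 18 register values, entry 0 of the remaining iterator holds the value during cycle 20, entry 40 the value during cycle 60, and so on. This mirrors the computation of current_cycle in main.rs below:

```rust
fn main() {
    let skip = 18;
    // step is the index into the iterator after the skip; the matching puzzle
    // cycle is offset by the skipped entries plus the two "missing" cycles of
    // my fictitious world.
    let real_cycle = |step: usize| step + 1 + skip + 1;
    assert_eq!(real_cycle(0), 20);
    assert_eq!(real_cycle(40), 60);
    assert_eq!(real_cycle(200), 220);
}
```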
Today, I learnt that Rust really will panic (in debug builds, at least) if you try to subtract 1 from an unsigned value that is 0. And off-by-one errors really are the two worst types of bugs that plague software development. I also played around with closures that move values into them for the first time. They are quite useful!
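As a small illustration of that learning: in a debug build, a plain subtraction below zero panics with an overflow error, while checked_sub turns the failure case into an explicit None.

```rust
fn main() {
    let zero: u64 = 0;
    // `zero - 1` would panic in a debug build; checked_sub makes the
    // underflow an Option instead.
    assert_eq!(zero.checked_sub(1), None);
    assert_eq!(5u64.checked_sub(3), Some(2));
}
```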
main.rs
use anyhow::Result;
// Constants.
const WIDTH: usize = 40;
fn render(crt: &Vec<bool>, width: usize, fill: char) -> String {
crt.iter()
.enumerate()
.map(|(idx, el)| {
let ch = if *el { '#' } else { fill };
if (idx + 1) % width == 0 {
format!("{}\n", ch)
} else {
ch.to_string()
}
})
.collect::<String>()
}
fn maybe_draw(crt: &mut Vec<bool>, reg: isize, cycle: usize, width: usize) {
let pixel_idx = cycle - 1;
let horizontal_pos = (pixel_idx % width) as isize;
// Used to avoid drawing past the edges. It turns out the register value is nice and those
// checks would not have been needed.
let sprite_at_left_edge = reg == 0;
let sprite_at_right_edge = reg == width as isize - 1;
if reg == horizontal_pos {
crt[pixel_idx] = true;
} else if reg + 1 == horizontal_pos && !sprite_at_right_edge {
crt[pixel_idx] = true;
} else if reg - 1 == horizontal_pos && !sprite_at_left_edge {
crt[pixel_idx] = true;
}
}
// This function likely performs a lot of allocations that are not needed but it makes the rest of
// the problem so much easier to solve.
fn extend(input: Vec<data::Op>) -> Vec<data::Op> {
input
.into_iter()
.map(|el| match el {
data::Op::None => vec![data::Op::None].into_iter(),
data::Op::Some(_) => vec![data::Op::None, el].into_iter(),
})
.flatten()
.collect::<Vec<_>>()
}
fn solve(file: &str, part1: bool, fill: char) -> Result<()> {
eprintln!("PROCESSING {}", file);
// Read file and convert into data.
let ops =
io::parse_chunks_to_data::<data::Op>(io::read_lines_from_file(file, 1)?, "op", None, None)?;
// We avoid that one-cycle-two-cycle weirdness by replacing each addx operation by a noop and
// an addx operation that is assumed to take only one cycle.
let extended_ops = extend(ops);
let mut reg = 1;
let reg_vals = extended_ops.into_iter().map(move |el| {
reg += match el {
data::Op::None => 0,
data::Op::Some(val) => val,
};
reg
});
if part1 {
// There is a lot of potential for off-by-one errors in this one.
//
// We skip the first 18 entries here because we want to start with the value during cycle
// 20, which is the value after cycle 19, and this solution ignores what happens during the
// first cycle, because that is trivial.
let skip = 18;
let mut skipper: isize = -1;
let interesting = reg_vals
.skip(skip)
.enumerate()
.filter_map(move |(step, reg)| {
// Now convert from our weird way of counting to that of the puzzle.
// We wanted to skip the first 19 cycles (add skip + 1) and cycle counting starts
// at one (add 1).
let current_cycle = step + 1 + skip + 1;
skipper += 1;
if skipper % 40 == 0 {
Some(reg * current_cycle as isize)
} else {
None
}
})
.sum::<isize>();
println!("{:?}", interesting);
} else {
let mut crt = vec![false; WIDTH * 6];
// We need to handle the first cycle separately because the cycle number used here is
// always that of the cycle we are in. During the first cycle, the register has a value of
// 1.
maybe_draw(&mut crt, 1, 1, WIDTH);
// Here, current_cycle is the cycle we are currently in. Thus, the number is one larger
// than what the example shows because the example usually talks about the value after a
// cycle but the value during a cycle is important. This also erroneously assumes the
// existence of a 241st cycle, but that's not really a problem because the value after the
// 240th cycle is the value during the 241st cycle, so it's consistent.
for (current_cycle, reg) in reg_vals.enumerate().map(|(step, el)| (step + 2, el)) {
maybe_draw(&mut crt, reg, current_cycle, WIDTH);
}
println!("\n{}\n", render(&crt, WIDTH, fill));
}
Ok(())
}
fn main() -> Result<()> {
// Part 1.
// The last argument is not important here.
solve(SAMPLE1, true, '.')?;
solve(SAMPLE2, true, '.')?;
solve(REAL, true, '.')?;
// Part 2.
solve(SAMPLE2, false, '.')?;
// Use a space as filler for the real solution to make the letters easier to read.
solve(REAL, false, ' ')?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::str::FromStr;
#[derive(Debug, Clone, Hash, PartialEq, Eq, Copy)]
pub enum Op {
None,
Some(isize),
}
impl FromStr for Op {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s
.split_whitespace()
.collect::<std::vec::Vec<_>>()
.as_slice()
{
["noop"] => Ok(Self::None),
["addx", val] => Ok(Self::Some(val.parse()?)),
_ => Err(Error::msg(format!("cannot parse {} as op", s))),
}
}
}
There are no tests this time.
Please have a look at src/main.rs for expected names of input files. Assuming the expected files are present, you only need to execute cargo run to run the solution. The expected input files are usually called sample.dat and stage_1.dat.
This is my implementation for both rounds of today’s puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language, while the creation of easily readable code is a second priority, this time. Writing fast code is a far-away third priority.
All source code lives in the src directory. This solution contains a main.rs, which defines the main executable. There is also a data.rs, which specifies data types important for this solution as well as associated functions, methods, and traits. There is also an io.rs, which contains helper functions related to input parsing and output processing.
No time to write much.
The code has comments that should explain the solution.
In part 1, you simply have to code what the task says.
In part 2, the realisation helps that you can write any number as c = n*p + m with n being a natural number. Then, you can subtract any number of multiples of p from c without changing divisibility conditions, especially since all p are prime numbers here. If you pick p to be the product of all unique prime numbers against which divisibility is being checked, you can keep your numbers small.
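That argument is easy to verify numerically: as long as d divides p, reducing c modulo p never changes c modulo d. A quick check with the divisors from the sample input:

```rust
fn main() {
    // Divisors from the sample input; p is their product.
    let divisors = [23i64, 19, 13, 17];
    let p: i64 = divisors.iter().product();
    let c: i64 = 123_456_789;
    for d in divisors {
        // d divides p, so subtracting multiples of p from c cannot change
        // whether the result is divisible by d.
        assert_eq!((c % p) % d, c % d);
    }
}
```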
main.rs
use anyhow::Result;
use std::collections::HashSet;
// Constants.
fn solve(file: &str, part1: bool) -> Result<()> {
eprintln!("PROCESSING {}", file);
// Read file and convert into data.
let mut monkeys = io::parse_chunks_to_data::<data::Monkey>(
io::read_lines_from_file(file, 7)?,
"monkey",
None,
None,
)?;
let prod_of_div_vals = monkeys
.iter()
.map(|el| el.get_div())
.collect::<HashSet<_>>()
.into_iter()
.product::<isize>();
if !part1 {
for monkey in &mut monkeys {
monkey.set_all_divs(prod_of_div_vals);
}
}
let rounds = if part1 { 20 } else { 10_000 };
for _round in 0..rounds {
for monkey_idx in 0..monkeys.len() {
// println!("check: {:?}", monkeys[monkey_idx]);
for (target, item) in monkeys[monkey_idx].inspect_and_toss().into_iter() {
monkeys[target].catch(item);
// println!("toss: {:?} <- {}", monkeys[target], item);
}
// println!("\n")
}
}
monkeys.sort_by(|monkey1, monkey2| monkey1.how_active().cmp(&monkey2.how_active()));
monkeys.reverse();
println!(
"most active: {} & {}",
monkeys[0].whoami(),
monkeys[1].whoami(),
);
println!(
"monkey business is {}",
monkeys[0].how_active() * monkeys[1].how_active()
);
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE1, true)?;
solve(REAL, true)?;
solve(SAMPLE1, false)?;
solve(REAL, false)?;
Ok(())
}
data.rs
use anyhow::{Context, Error, Result};
use std::str::FromStr;
#[derive(Debug)]
pub struct Monkey {
idx: usize,
activity: usize,
items: Vec<isize>,
op: QuadraticOp,
test: DivisitilibytTest,
}
impl Monkey {
pub fn inspect_and_toss(&mut self) -> Vec<(usize, isize)> {
let result = self
.items
.iter()
.map(|item_val| {
self.activity += 1;
let new_item_val = self.op.apply(item_val);
(self.test.which_monkey(&new_item_val), new_item_val)
})
.collect::<Vec<_>>();
self.items = vec![];
result
}
pub fn catch(&mut self, item: isize) {
self.items.push(item);
}
pub fn whoami(&self) -> usize {
self.idx
}
pub fn how_active(&self) -> usize {
self.activity
}
pub fn set_all_divs(&mut self, prod: isize) {
self.op.prod = Some(prod);
}
pub fn get_div(&self) -> isize {
self.test.div_val
}
}
// All operations can be realised as a*x^2 + b*x + c
#[derive(Debug)]
struct QuadraticOp {
a: isize,
b: isize,
c: isize,
prod: Option<isize>,
}
impl QuadraticOp {
fn apply(&self, val: &isize) -> isize {
if let Some(prod) = self.prod {
// We update our worry level but don't divide by anything. Instead, to keep the numbers
// small and avoid weirdness due to divisibility checks, we take the modulo with
// respect to the product of all unique divisibility checks. Doing so never influences
// any of the divisibility checks.
(self.a * val * val + self.b * val + self.c) % prod
} else {
// We update our worry level but always divide by 3 in the end. This is for part 1.
(self.a * val * val + self.b * val + self.c) / 3
}
}
}
#[derive(Debug)]
struct DivisitilibytTest {
div_val: isize,
true_monkey: usize,
false_monkey: usize,
}
impl DivisitilibytTest {
fn which_monkey(&self, val: &isize) -> usize {
if val % self.div_val == 0 {
self.true_monkey
} else {
self.false_monkey
}
}
}
// This one is not pretty but it works and correctly reports errors. More context can always be
// added if there are unexpected errors.
impl FromStr for Monkey {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
let lines = s
.split("\n")
.map(|el| el.trim())
.collect::<std::vec::Vec<_>>();
let idx = if let ["Monkey", idx_str] =
lines[0].split_whitespace().collect::<Vec<_>>().as_slice()
{
idx_str.trim_end_matches(":").parse().context("monkey id")?
} else {
return Err(Error::msg("cannot find monkey id string"));
};
let maybe_items = if let ["Starting items", items_str] =
lines[1].split(":").collect::<Vec<_>>().as_slice()
{
items_str
.split(", ")
.map(|el| el.trim().parse::<isize>())
.collect::<Vec<_>>()
} else {
return Err(Error::msg("cannot find items line"));
};
if maybe_items
.iter()
.any(|el| if let Err(_) = el { true } else { false })
{
return Err(Error::msg("cannot parse items line"));
}
// The next line can never panic.
let items = maybe_items
.into_iter()
.map(|el| el.unwrap())
.collect::<Vec<_>>();
let op = if let ["Operation: new ", op_str] =
lines[2].split("=").collect::<Vec<_>>().as_slice()
{
match op_str.split_whitespace().collect::<Vec<_>>().as_slice() {
["old", "*", "old"] => Ok(QuadraticOp {
a: 1,
b: 0,
c: 0,
prod: None,
}),
["old", "*", num] => Ok(QuadraticOp {
a: 0,
b: num.parse().context("multiplier")?,
c: 0,
prod: None,
}),
["old", "+", num] => Ok(QuadraticOp {
a: 0,
b: 1,
c: num.parse().context("adder")?,
prod: None,
}),
_ => Err(Error::msg("cannot build op")),
}
} else {
return Err(Error::msg("cannot find operations line"));
}
.context("operation")?;
let div_val = if let ["Test:", "divisible", "by", val] =
lines[3].split_whitespace().collect::<Vec<_>>().as_slice()
{
val.parse().context("divisibility")?
} else {
return Err(Error::msg("cannot find divisibility line"));
};
let true_monkey = if let ["If", "true:", "throw", "to", "monkey", val] =
lines[4].split_whitespace().collect::<Vec<_>>().as_slice()
{
val.parse().context("true monkey")?
} else {
return Err(Error::msg("cannot find true monkey line"));
};
let false_monkey = if let ["If", "false:", "throw", "to", "monkey", val] =
lines[5].split_whitespace().collect::<Vec<_>>().as_slice()
{
val.parse().context("false monkey")?
} else {
return Err(Error::msg("cannot find false monkey line"));
};
if idx == true_monkey || idx == false_monkey {
return Err(Error::msg("trying to toss to myself"));
}
if !is_prime(div_val) {
return Err(Error::msg("div val is no prime"));
}
let test = DivisitilibytTest {
div_val,
true_monkey,
false_monkey,
};
let activity = 0;
Ok(Self {
idx,
activity,
items,
op,
test,
})
}
}
// This is a quick check for being prime.
fn is_prime(val: isize) -> bool {
for i in 2..val {
if val % i == 0 {
return false;
}
}
true
}
There are no tests this time.
Please have a look at src/main.rs for expected names of input files. Assuming the expected files are present, you only need to execute cargo run to run the solution. The expected input files are usually called sample.dat and stage_1.dat.
This is my implementation for both rounds of today’s puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language, while the creation of easily readable code is a second priority, this time. Writing fast code is a far-away third priority.
All source code lives in the src directory. This solution contains a main.rs, which defines the main executable. There is also a data.rs, which specifies data types important for this solution as well as associated functions, methods, and traits. There is also an io.rs, which contains helper functions related to input parsing and output processing.
No time to write much. This might be the most complex puzzle this year so far.
Part 1 can be solved by implementing A* on a directed graph where each step has unit cost.
Then, have it search for the path from start to finish, making sure to use a sane, admissible heuristic.
Part 2 can be solved very easily by adding a new, virtual node and connecting it to all nodes of height a.
Then, search for the shortest path from that virtual node to the end and subtract 1 from the length of the path (for the virtual node).
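As a standalone illustration of the virtual-node idea (a sketch under simplified assumptions, not the code from this solution): with unit step costs, plain BFS finds the same shortest paths as A*, and seeding the search with every candidate start cell is equivalent to adding a virtual node connected to all of them.

```rust
use std::collections::{HashMap, VecDeque};

// Shortest path from any of several start cells to a goal cell on a height
// grid with unit step costs. Seeding the queue with all starts mirrors the
// virtual-node trick without needing the extra subtraction afterwards.
fn shortest_from_any(
    grid: &[&[u8]],
    starts: &[(usize, usize)],
    goal: (usize, usize),
) -> Option<usize> {
    let mut dist: HashMap<(usize, usize), usize> = HashMap::new();
    let mut queue = VecDeque::new();
    for &s in starts {
        dist.insert(s, 0);
        queue.push_back(s);
    }
    while let Some((x, y)) = queue.pop_front() {
        let d = dist[&(x, y)];
        if (x, y) == goal {
            return Some(d);
        }
        let h = grid[y][x];
        let mut neighbours = Vec::new();
        if x > 0 {
            neighbours.push((x - 1, y));
        }
        if y > 0 {
            neighbours.push((x, y - 1));
        }
        if x + 1 < grid[y].len() {
            neighbours.push((x + 1, y));
        }
        if y + 1 < grid.len() {
            neighbours.push((x, y + 1));
        }
        for n in neighbours {
            // A step is only allowed onto cells at most one unit higher.
            if grid[n.1][n.0] <= h + 1 && !dist.contains_key(&n) {
                dist.insert(n, d + 1);
                queue.push_back(n);
            }
        }
    }
    None
}
```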
main.rs
use anyhow::{Error, Result};
use std::collections::{HashMap, HashSet};
// Constants.
fn build_graph(map: &HashMap<data::Point, data::Height>) -> HashSet<data::Node> {
map.iter()
.map(|(&p, &h)| {
let neighbours = p
.env()
.into_iter()
.filter_map(|el| match map.get(&el) {
Some(other_h) => {
if other_h.height() <= h.height() + 1 {
Some(el)
} else {
None
}
}
None => None,
})
.collect::<HashSet<_>>();
data::Node::new(p, h, neighbours)
})
.collect::<HashSet<data::Node>>()
}
fn find_path<'a>(
start: &'a data::Node,
end: &'a data::Node,
graph: &'a HashSet<data::Node>,
estimator_fn: fn(&data::Node, &data::Point) -> usize,
) -> Result<HashMap<&'a data::Node, (Option<data::Point>, usize)>> {
let ref_point = end.pos();
let estimator = move |node: &data::Node| estimator_fn(node, &ref_point);
let get_node = move |node: &data::Point| {
graph
.get(&node.as_node())
.ok_or(Error::msg(format!("cannot get node {:?}", node)))
};
let mut connections = HashMap::<&'a data::Node, (Option<data::Point>, usize)>::new();
let mut checkable = HashMap::<&data::Node, (data::Point, usize)>::new();
// Add starting point to resulting path.
connections.insert(&start, (None, 0));
// Add neighbours of starting point to list of checkable values.
for neigh in start.neighbours() {
let neigh_node = get_node(neigh)?;
// Estimated costs are the most direct possible connection to the end. The constant
// cost of 1 for the final step is omitted consistently everywhere, which keeps the
// relative ordering of estimates intact.
checkable.insert(neigh_node, (start.pos(), estimator(neigh_node)));
}
// Search until we added the final node to the path or until there is nothing more to check.
while !connections.contains_key(&end) && checkable.len() > 0 {
// Get node with minimum _estimated_ cost.
let next_best_node = checkable
.iter_mut()
// Get node with minimum estimated cost.
.min_by(|(_node1, (_pre1, cost1)), (_node2, (_pre2, cost2))| cost1.cmp(&cost2))
.ok_or(Error::msg("cannot find next node"))?
.0
.clone();
let (_, (predecessor, _old_estimate)) = checkable
.remove_entry(next_best_node)
.ok_or(Error::msg("cannot find predecessor"))?;
let cost_of_predecessor = connections
.get(&predecessor.as_node())
.ok_or(Error::msg("predecessor has not been visited"))?
.1;
// Add point to resulting path.
connections.insert(next_best_node, (Some(predecessor), cost_of_predecessor + 1));
// Add neighbours of point to list of checkable values.
for neigh in next_best_node.neighbours() {
let neigh_node = get_node(neigh)?;
if !connections.contains_key(neigh_node) {
let estimate = cost_of_predecessor + estimator(neigh_node);
let previous_best = checkable
.get(neigh_node)
.unwrap_or(&(*neigh, std::usize::MAX))
.1;
if previous_best > estimate {
checkable.insert(neigh_node, (next_best_node.pos(), estimate));
}
// connections.insert(neigh_node, Some(start.pos()));
}
}
}
Ok(connections)
}
fn extract_shortest_path<'a>(
end: &'a data::Node,
points: HashMap<&'a data::Node, (Option<data::Point>, usize)>,
graph: &'a HashSet<data::Node>,
) -> Result<Vec<&'a data::Node>> {
let mut path = vec![end];
let mut check_node = end;
while let Some((Some(pre), _)) = points.get(check_node) {
let node = graph
.get(&pre.as_node())
.ok_or(Error::msg("cannot find node in graph"))?;
path.push(node);
check_node = node;
}
Ok(path)
}
fn solve(file: &str) -> Result<()> {
eprintln!("PROCESSING {}", file);
// Read file and convert into data.
let height_map = io::parse_chars_to_data::<data::Height>(file, "vec")?;
// Create graph. To avoid self-referencing data types, we identify each node only by its
// position.
let mut graph = build_graph(&height_map);
// Find start and end nodes. Also adjust heights to use actual values. This is a bit of a
// pain in Rust...
let mut start_node = graph
.iter()
.find(|&el| el.get_height() == data::Height::Start)
.ok_or(Error::msg("cannot find start node"))?
.clone();
start_node.set_height(data::Height::Normal(0));
graph
.replace(start_node.clone())
.ok_or(Error::msg("cannot replace start node"))?;
let mut end_node = graph
.iter()
.find(|&el| el.get_height() == data::Height::End)
.ok_or(Error::msg("cannot find end node"))?
.clone();
end_node.set_height(data::Height::Normal(25));
graph
.replace(end_node.clone())
.ok_or(Error::msg("cannot replace end node"))?;
let estimator = |node: &data::Node, ref_point: &data::Point| node.infinity_dist(&ref_point);
let path = find_path(&start_node, &end_node, &graph, estimator)
.map(|el| extract_shortest_path(&end_node, el, &graph))??;
println!("part 1: {}", path.len() - 1);
// Part 2.
// Add an additional node at a position that hadn't yet been part of the graph and connect it
// to all nodes of zero elevation.
let possible_starts = graph
.iter()
.filter(|node| node.get_height().height() == 0)
.map(|node| node.pos())
.collect::<HashSet<_>>();
let far_away_fake_node = data::Node::new(
data::Point { x: -10, y: -10 },
data::Height::Normal(0),
possible_starts,
);
if !graph.insert(far_away_fake_node.clone()) {
return Err(Error::msg(
"refusing to overwrite existing node for fake node",
));
}
// Search from the virtual node this time instead of the real start node.
let path2 = find_path(&far_away_fake_node, &end_node, &graph, estimator)
.map(|el| extract_shortest_path(&end_node, el, &graph))??;
// Subtract 1 for the virtual start node and 1 more to convert a node count into a step
// count.
println!("part 2: {}", path2.len() - 2);
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE1)?;
solve(REAL)?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::collections::HashSet;
use std::hash::{Hash, Hasher};
use std::str::FromStr;
#[derive(Debug, Hash, Eq, PartialEq, Copy, Clone)]
pub struct Point {
pub x: isize,
pub y: isize,
}
impl Point {
pub fn new(x: isize, y: isize) -> Self {
Self { x, y }
}
pub fn env(&self) -> Vec<Self> {
vec![
Self {
x: self.x - 1,
y: self.y,
},
Self {
x: self.x + 1,
y: self.y,
},
Self {
x: self.x,
y: self.y - 1,
},
Self {
x: self.x,
y: self.y + 1,
},
]
}
pub fn as_node(&self) -> Node {
// Build a dummy node for lookups. Only the position matters for hashing and
// equality, so the height and neighbour values here are irrelevant.
Node {
p: *self,
h: Height::End,
neighbours: HashSet::<Point>::new(),
}
}
}
#[derive(Copy, Clone, Debug, Hash, PartialEq)]
pub enum Height {
Normal(usize),
Start,
End,
}
impl Height {
pub fn height(&self) -> usize {
match self {
&Self::End => 25,
&Self::Start => 0,
&Self::Normal(val) => val,
}
}
}
// This one is not pretty but it works and correctly reports errors. More context can always be
// added if there are unexpected errors.
impl FromStr for Height {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
if s.len() == 1 {
Ok(match s {
"S" => Self::Start,
"E" => Self::End,
// The next line can never panic.
_ => Self::Normal(s.chars().next().unwrap() as usize - 'a' as usize),
})
} else {
Err(Error::msg("received several characters or none"))
}
}
}
#[derive(Debug, Clone)]
pub struct Node {
p: Point,
h: Height,
neighbours: HashSet<Point>,
}
impl Node {
pub fn pos(&self) -> Point {
self.p
}
pub fn neighbours<'a>(&'a self) -> &'a HashSet<Point> {
&self.neighbours
}
pub fn new(p: Point, h: Height, neighbours: HashSet<Point>) -> Self {
Self { p, h, neighbours }
}
pub fn get_height(&self) -> Height {
self.h
}
pub fn set_height(&mut self, height: Height) {
self.h = height;
}
// Note: despite its name, this computes the Manhattan (L1) distance, which is an
// admissible heuristic on a four-connected grid with unit step costs.
pub fn infinity_dist(&self, other: &Point) -> usize {
((self.p.x - other.x).abs() + (self.p.y - other.y).abs()) as usize
}
}
// We identify a node only by its position and never by its associated height.
impl Hash for Node {
fn hash<H: Hasher>(&self, state: &mut H) {
self.p.hash(state)
}
}
impl PartialEq for Node {
fn eq(&self, other: &Self) -> bool {
self.p == other.p
}
}
impl Eq for Node {}
Input parsing was a pain. The idea I used is to split each package at the given commas, but to only use those commas that are at the correct nesting level. Then, the parser is called again for each individual entry. Special care has to be taken since there are empty lists, too.
This time, I did not try to get around self-referencing data types, which means I got to use Box<T> for the first time.
It was nicer than expected.
It’s good to know that Rust will ensure memory safety also for data on the heap.
Part 1 was straightforward once the inputs had been parsed.
Luckily, I had decided to create compare methods for each separate data type (Pkg and Elem) that returned a three-valued Ordering.
Thus, in part 2, I could use the very same comparison methods to have Rust sort the vector of packages.
In order to find out the indices of the divider packages but avoid implementing a comparison operation for packages, I sorted a vector of tuples that contained a package and a marker. Once sorted, I could just retrieve the indices of the markers.
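The nesting-level splitting described above can be condensed into a standalone helper. This is a sketch of the idea only, not the exact parser used in data.rs:

```rust
// Split a bracketed list like "[1,[2,3],[]]" at the commas sitting at the top
// nesting level, returning the element substrings: ["1", "[2,3]", "[]"].
// Returns None for strings that are not bracket-delimited or are malformed.
fn split_top_level(s: &str) -> Option<Vec<&str>> {
    let inner = s.strip_prefix('[')?.strip_suffix(']')?;
    if inner.is_empty() {
        return Some(vec![]);
    }
    let mut parts = vec![];
    let mut depth = 0usize;
    let mut start = 0usize;
    for (idx, ch) in inner.char_indices() {
        match ch {
            '[' => depth += 1,
            // checked_sub bails out on unbalanced brackets instead of panicking.
            ']' => depth = depth.checked_sub(1)?,
            ',' if depth == 0 => {
                parts.push(&inner[start..idx]);
                start = idx + 1;
            }
            _ => {}
        }
    }
    parts.push(&inner[start..]);
    Some(parts)
}
```

Each returned substring can then be fed back into the same parser, which is exactly the recursion used below.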
main.rs
use anyhow::{Error, Result};
// Constants.
fn solve(file: &str) -> Result<()> {
eprintln!("PROCESSING {}", file);
// Read file and convert into data.
let pairs = io::parse_chunks_to_data::<data::Input>(
io::read_lines_from_file(file, 3)?,
"package",
None,
None,
)?;
// Part 1.
let ordered_correctly = pairs
.iter()
.enumerate()
.filter_map(|(idx, el)| {
if el.is_ordered_correctly() {
Some(idx + 1)
} else {
None
}
})
.sum::<usize>();
println!("correctly ordered: {}", ordered_correctly);
// Part 2.
// Create divider packages. We wrote a parser so we might as well use it for easy creation of
// the divider packages.
let div1 = "[[2]]".parse::<data::Pkg>()?;
let div2 = "[[6]]".parse::<data::Pkg>()?;
// A value of "true" means this package is a marker.
let mut all_pkgs = vec![(true, &div1), (true, &div2)];
all_pkgs.extend(
pairs
.iter()
.flat_map(|el| [(false, &el.left), (false, &el.right)]),
);
// Sort using the comparison method created for part 1.
all_pkgs.sort_by(|(_marker1, pkg1), (_marker2, pkg2)| pkg1.compare(pkg2));
let decoder_key = all_pkgs
.iter()
.enumerate()
.filter(|(_idx, (marker, _pkg))| *marker)
.map(|(idx, (_marker, _pkg))| idx + 1)
.product::<usize>();
println!("decoder key is {}", decoder_key);
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE1)?;
solve(REAL)?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::cmp::Ordering;
use std::str::FromStr;
#[derive(Debug)]
pub struct Input {
pub left: Pkg,
pub right: Pkg,
}
#[derive(Debug)]
pub struct Pkg(Vec<Elem>);
#[derive(Debug)]
pub enum Elem {
Num(isize),
Dat(Box<Pkg>),
}
impl Input {
pub fn is_ordered_correctly(&self) -> bool {
self.left.compare(&self.right) == Ordering::Less
}
}
impl Pkg {
// This is run on the left value with the right value as argument.
pub fn compare(&self, other: &Self) -> Ordering {
// The zip operator will end the iteration as soon as one of the two iterators runs out.
for (left, right) in self.0.iter().zip(other.0.iter()) {
match left.compare(&right) {
Ordering::Equal => {}
ord @ Ordering::Less | ord @ Ordering::Greater => return ord,
}
}
// If we reach here, all value comparisons turned out equal so far. Thus, perform the
// length comparison.
self.0.len().cmp(&other.0.len())
}
}
impl Elem {
fn compare(&self, other: &Self) -> Ordering {
match (self, other) {
// If both are numbers, compare the numbers.
(&Elem::Num(left), &Elem::Num(right)) => left.cmp(&right),
// If both are lists, compare the lists.
(&Elem::Dat(ref left), &Elem::Dat(ref right)) => left.compare(&right),
// If one is a number and the other one is a list, wrap the number in a list and
// compare again.
(&Elem::Num(left), &Elem::Dat(ref right)) => Pkg(vec![Elem::Num(left)]).compare(&right),
(&Elem::Dat(ref left), &Elem::Num(right)) => left.compare(&Pkg(vec![Elem::Num(right)])),
}
}
}
impl FromStr for Input {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
let mut lines = s.split("\n");
let left = lines
.next()
.ok_or(Error::msg("cannot find left line"))?
.parse::<Pkg>()?;
let right = lines
.next()
.ok_or(Error::msg("cannot find right line"))?
.parse::<Pkg>()?;
Ok(Self { left, right })
}
}
// This one is not pretty but it works and correctly reports errors.
impl FromStr for Pkg {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
let mut nesting_level: usize = 0;
let mut chars = String::new();
if !s.starts_with("[") || !s.ends_with("]") {
return Err(Error::msg("string is no real package"));
}
let mut elems_at_level = vec![];
// Skip the opening bracket and extract all elements at this level of the hierarchy. This
// is not pretty but seems to work.
for char in s[1..s.len()].chars() {
let val = match char {
'[' => {
// Since we skip the very first "[", this indicates the start of a nested list.
chars.push(char);
nesting_level += 1;
None
}
']' => {
if nesting_level == 0 {
// Emit what we found so far. This will be the very last element. We use
// this closing bracket to ensure we do emit the very last value.
let result = chars;
chars = String::new();
Some(result)
} else {
// We found the end of a nested list. Remember the current character.
chars.push(char);
nesting_level -= 1;
None
}
}
',' => {
if nesting_level == 0 {
// Emit one value. This is one element of the list at this level.
let result = chars;
chars = String::new();
Some(result)
} else {
// We are still not at the top nesting level.
chars.push(char);
None
}
}
_ => {
// Remember all other characters.
chars.push(char);
None
}
};
if let Some(entry) = val {
elems_at_level.push(entry);
}
}
// Parse all entries at this level of the hierarchy into "Elem"s. Errors are being handled
// further down.
let maybe_parsed_elems = elems_at_level
.into_iter()
.map(|el| el.parse::<Elem>())
.collect::<Vec<_>>();
let mut has_err = false;
for el in &maybe_parsed_elems {
if let Err(err) = el {
has_err = true;
eprintln!("{:?}", err);
}
}
if has_err {
Err(Error::msg("cannot parse package"))
} else {
let parsed_elems = maybe_parsed_elems
.into_iter()
// This line can never panic.
.map(|el| el.unwrap())
.collect::<Vec<_>>();
Ok(Self(parsed_elems))
}
}
}
impl FromStr for Elem {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
if s.len() == 0 {
// Empty package.
Ok(Self::Dat(Box::new(Pkg(vec![]))))
} else if s.starts_with("[") {
// This is itself a non-empty package.
Ok(Self::Dat(Box::new(s.parse::<Pkg>()?)))
} else {
// Otherwise, this is a number.
Ok(Self::Num(s.parse::<isize>()?))
}
}
}
Today’s puzzle was rather straightforward.
Just follow each grain of sand on its way down.
I tracked positions of sand that settled via a HashSet to be able to check very easily whether a position has been occupied.
For the rocks, I went a different route, though.
Expecting some twist in part 2 that would increase the size of the playing field a lot, I did not fill the positions of rocks into an occupation map.
Instead, I remembered them as one-dimensional ranges and checked whether a point was on such a range.
I have no clue whether that provides better performance than using a set.
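The range-based rock check boils down to a point-on-segment test for axis-aligned segments. A simplified formulation of the same idea (not the exact code from data.rs below):

```rust
// Check whether point p lies on the axis-aligned segment from a to b,
// endpoints included. Tuples are (x, y) coordinates.
fn on_segment(a: (isize, isize), b: (isize, isize), p: (isize, isize)) -> bool {
    // v must lie between the two endpoint coordinates, in either order.
    let between = |lo: isize, hi: isize, v: isize| lo.min(hi) <= v && v <= lo.max(hi);
    // Vertical segment: shared x, y within bounds. Horizontal: the reverse.
    (a.0 == b.0 && p.0 == a.0 && between(a.1, b.1, p.1))
        || (a.1 == b.1 && p.1 == a.1 && between(a.0, b.0, p.0))
}
```

A point is then blocked by a rock formation if it lies on any of the segments between consecutive edge points.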
Today, I learned that the difference between a cargo run and a cargo run --release in terms of runtime can be a factor of 55!
I also wrote a stepwise visualiser.
To use it, set the env var RENDER to 1 and select the part of the puzzle you want to run with the env var RUN.
Set the env var RUN to 0, 1, 2, or 3 for the sample part 1, the real puzzle part 1, the sample part 2, or the real puzzle part 2.
After printing the final resting place of one grain of sand, you have to hit return to resume.
To get a nicely rendered view, pipe the output of yes into the executable.
Warning: This will tax your CPU quite significantly.
Only run in release mode for RUN==3.
main.rs
use anyhow::{Error, Result};
use std::collections::HashSet;
// Constants.
const SOURCE: data::Point = data::Point { x: 500, y: 0 };
fn is_blocked(
p: &data::Point,
rocks: &Vec<data::Rocks>,
sands: &HashSet<data::Point>,
max_y: Option<isize>,
) -> bool {
if sands.contains(p) {
true
} else if max_y.map_or(false, |bottom| p.y >= bottom) {
// Everything at or below the optional floor counts as blocked.
true
} else {
rocks.iter().any(|el| el.contains(*p))
}
}
fn render(
rocks: &Vec<data::Rocks>,
sands: &HashSet<data::Point>,
fov: (isize, isize, isize, isize),
) -> String {
let dist = 10;
let mut image = String::new();
let empty = HashSet::<data::Point>::new();
let min_y = if fov.1 - 4 <= -2 { fov.1 - 2 } else { -2 };
for y in min_y..fov.3 + 4 {
for x in fov.0 - dist..fov.2 + dist {
let p = data::Point { x, y };
// This point is a rock.
let char = if p == SOURCE {
'S'
} else if p.y == fov.3 + 2 {
'#'
} else if is_blocked(&p, rocks, &empty, None) {
'#'
// This point is sand.
} else if is_blocked(&p, rocks, sands, None) {
'.'
// This point is free.
} else {
' '
};
image.push(char);
}
image.push('\n');
}
image
}
fn is_env(var: &str, val: &str, def: &str) -> bool {
std::env::var(var).unwrap_or(def.to_string()) == val
}
fn solve(file: &str, part1: bool) -> Result<()> {
println!("PROCESSING {}", file);
// Read file and convert into data.
let rocks = io::parse_chunks_to_data::<data::Rocks>(
io::read_lines_from_file(file, 1)?,
"rocks",
None,
None,
)?;
// Coordinates for rendering. This is just lazy copy-pasting.
let min_x_rocks = rocks
.iter()
.map(|el| el.edges.iter())
.flatten()
.map(|el| el.x)
.min()
.ok_or(Error::msg("cannot find xmin"))?;
let min_y_rocks = rocks
.iter()
.map(|el| el.edges.iter())
.flatten()
.map(|el| el.y)
.min()
.ok_or(Error::msg("cannot find ymin"))?;
let max_x_rocks = rocks
.iter()
.map(|el| el.edges.iter())
.flatten()
.map(|el| el.x)
.max()
.ok_or(Error::msg("cannot find xmax"))?;
let max_y_rocks = rocks
.iter()
.map(|el| el.edges.iter())
.flatten()
.map(|el| el.y)
.max()
.ok_or(Error::msg("cannot find ymax"))?;
let render_coords = (min_x_rocks, min_y_rocks, max_x_rocks, max_y_rocks);
let mut highest_y = SOURCE.y;
let max_y = if part1 { max_y_rocks } else { max_y_rocks + 2 };
let mut sands = HashSet::<data::Point>::new();
let blocked_y = if part1 { None } else { Some(max_y) };
let do_render = is_env("RENDER", "1", "0");
// Keep spawning sand. The inner while loop stops once a grain settles or exceeds our world
// (and would fall to infinity); the part-specific break conditions sit at the bottom.
loop {
// Spawn new sand.
if do_render {
// Clear screen and print view.
println!("\x1bc{}", render(&rocks, &sands, render_coords));
// Only continue after the user confirmed.
std::io::stdin().lines().next();
}
let mut sand = SOURCE;
let mut has_settled = false;
while !has_settled && sand.y <= max_y {
// Find the highest point that contains either sand or is a rock.
let mut next = sand.down();
while !is_blocked(&next, &rocks, &sands, blocked_y) && sand.y <= max_y {
sand = next;
next = sand.down()
}
// If we reach here, that piece of sand has hit an occupied tile on its way down.
// We don't check again whether we exceeded our world because that will be checked the
// next time we reach the top of the while loop.
//
// Check down to the left first.
next = sand.left_down();
if !is_blocked(&next, &rocks, &sands, blocked_y) {
sand = next;
continue;
}
// Check down to the right next.
next = sand.right_down();
if !is_blocked(&next, &rocks, &sands, blocked_y) {
sand = next;
continue;
}
has_settled = true;
sands.insert(sand);
}
if sand.y > highest_y {
highest_y = sand.y;
}
if part1 {
if highest_y >= max_y {
break;
}
} else {
// Break if the source has been blocked.
if sands.contains(&SOURCE) {
break;
}
}
}
// Sorting is not needed for the answer; it only makes debugging output reproducible.
let mut sorted = Vec::<data::Point>::from_iter(sands.into_iter());
sorted.sort_by(|el1, el2| {
let x_cmp = el1.x.cmp(&el2.x);
if x_cmp == std::cmp::Ordering::Equal {
el1.y.cmp(&el2.y)
} else {
x_cmp
}
});
println!("amount of sand is {:?}", sorted.len());
Ok(())
}
fn main() -> Result<()> {
// Run all by default or if a specific one was chosen. Useful for rendering just one.
if is_env("RUN", "0", "0") {
solve(SAMPLE1, true)?;
}
if is_env("RUN", "1", "1") {
solve(REAL, true)?;
}
if is_env("RUN", "2", "2") {
solve(SAMPLE1, false)?;
}
if is_env("RUN", "3", "3") {
solve(REAL, false)?;
}
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::hash::Hash;
use std::str::FromStr;
#[derive(Debug, Hash, Eq, PartialEq, Copy, Clone)]
pub struct Point {
pub x: isize,
pub y: isize,
}
#[derive(Debug)]
pub struct Rocks {
pub edges: Vec<Point>,
}
impl Point {
fn dist(&self, other: &Self) -> Self {
Self {
x: other.x - self.x,
y: other.y - self.y,
}
}
fn contains(&self, other: &Self) -> bool {
if other.x == 0 && other.y == 0 {
true
} else if self.x != 0 {
other.y == 0
&& self.x.clamp(-1, 1) == other.x.clamp(-1, 1)
&& self.x.abs() >= other.x.abs()
} else {
other.x == 0
&& self.y.clamp(-1, 1) == other.y.clamp(-1, 1)
&& self.y.abs() >= other.y.abs()
}
}
pub fn down(&self) -> Self {
Self {
x: self.x,
y: self.y + 1,
}
}
pub fn left_down(&self) -> Self {
Self {
x: self.x - 1,
y: self.y + 1,
}
}
pub fn right_down(&self) -> Self {
Self {
x: self.x + 1,
y: self.y + 1,
}
}
}
impl Rocks {
pub fn contains(&self, p: Point) -> bool {
for (left, right) in self.edges.iter().zip(self.edges.iter().skip(1)) {
let edge_diff = right.dist(left);
let point_diff = p.dist(left);
if edge_diff.contains(&point_diff) {
return true;
}
}
false
}
}
impl FromStr for Point {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s.split(",").collect::<Vec<_>>().as_slice() {
[x, y] => Ok(Self {
x: x.parse()?,
y: y.parse()?,
}),
_ => Err(Error::msg("cannot parse point")),
}
}
}
impl FromStr for Rocks {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
let maybe_edges = s
.split(" -> ")
.map(|el| el.parse::<Point>())
.collect::<Vec<_>>();
let mut has_err = false;
for edge in &maybe_edges {
if let Err(err) = edge {
eprintln!("{:?}", err);
has_err = true;
}
}
if has_err {
Err(Error::msg("cannot parse as edges"))
} else {
let edges = maybe_edges
.into_iter()
.map(|el| el.unwrap())
.collect::<Vec<_>>();
// Check whether edges go diagonally.
for (left, right) in edges.iter().zip(edges.iter().skip(1)) {
let edge_diff = right.dist(left);
if edge_diff.x != 0 && edge_diff.y != 0 {
return Err(Error::msg("found an edge that isn't straight"));
}
}
Ok(Self { edges })
}
}
}
There are no tests this time.
Please have a look at src/main.rs for the expected names of input files.
Assuming the expected files are present, you only need to execute the command cargo run --release to run the solution.
The expected input files are usually called sample.dat and stage_1.dat.
Part 1 was easy enough, I simply gave each exclusion zone a function that provided all points within it at a certain y-coordinate, fed all of them into a set, and then checked how many elements it had. Not very pretty but worked well enough.
At first, I brute-forced part 2 and got lucky. However, that solution wasn’t satisfying and then I found some more time to look into this today. Yeah!
The final solution to part 2 turned out to be one that I never expected to perform well but that did, as it turns out. What I do is I basically perform a line scan along the x axis for all y coordinates. For each y, I extract all non-empty overlaps between exclusion zones and the scanning line and discarded those that were outside the playing field. Then, I merged those overlaps.
To make merging overlaps, which are ranges, very easy, I sorted all of them first by their x_min coordinate and, if those are equal, by their x_max coordinate. I never thought that sorting them would be efficient enough, but it is. Once they have been sorted, merging them is easy. If x_min of the first entry is not 0, we’ve found the coordinates. Otherwise, we check whether x_min of the next range in line is greater than the current x_max. If so, we’ve found the empty spot. If not, we increase our currently known x_max to the x_max of that range. Once the known x_max reaches 4,000,000, we know that this line doesn’t contain the beacon. Rinse and repeat for every y until the distress beacon has been found.
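The merge step described above can be sketched as a small standalone function over half-open ranges (an illustration of the idea, not the repo's exact types):

```rust
// Scan half-open ranges [lo, hi) covering [min, max) and return the first
// uncovered x, if any.
fn find_gap(mut ranges: Vec<(isize, isize)>, min: isize, max: isize) -> Option<isize> {
    // Sort by left edge first, right edge second; tuples sort lexicographically.
    ranges.sort();
    let mut covered = min;
    for (lo, hi) in ranges {
        if lo > covered {
            // The spot just past the merged prefix is free.
            return Some(covered);
        }
        covered = covered.max(hi);
        if covered >= max {
            // The whole line is covered; no beacon on this line.
            return None;
        }
    }
    // The ranges ran out before reaching max.
    if covered < max {
        Some(covered)
    } else {
        None
    }
}
```

Running this once per y coordinate is the line scan used for part 2.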
main.rs
use anyhow::{Error, Result};
use std::collections::HashSet;
// Constants.
fn find_missing_x(
diamonds: &Vec<data::Diamond>,
outer_min: isize,
outer_max: isize,
y: isize,
) -> Option<(isize, isize)> {
// Extract all ranges at the given y-coordinate first.
let mut ranges = diamonds
.iter()
.map(|el| el.xrange_at_y(&y).clamp(outer_min, outer_max))
.filter(|el| el != &data::NULL_RANGE)
.collect::<Vec<_>>();
// Then, we sort by left coordinate first and by right coordinate second. That makes finding
// the missing x spot trivial.
ranges.sort_by(|range1, range2| {
let left_cmp = range1.left.cmp(&range2.left);
if left_cmp == std::cmp::Ordering::Equal {
range1.right.cmp(&range2.right)
} else {
left_cmp
}
});
// If no range intersects this line inside the playing field, every x is free.
if ranges.is_empty() {
return Some((outer_min, y));
}
let mut max = ranges[0].right;
if ranges[0].left != outer_min {
// Wouldn't that be nice.
Some((outer_min, y))
} else {
for range in ranges[1..].iter() {
// We can never find a left coordinate that is smaller than what we already have.
if range.left > max {
// Yeah, found it!
return Some((max, y));
} else if range.right > max {
max = range.right;
}
// Ranges ending at or before the known maximum can be skipped.
}
None
}
}
fn solve(file: &str, y: isize, (min, max): (isize, isize)) -> Result<()> {
println!("PROCESSING {} WITH Y {}", file, y);
// Read file and convert into data.
let exclusion_zones = io::parse_chunks_to_data::<data::Diamond>(
io::read_lines_from_file(file, 1)?,
"diamonds",
None,
None,
)?;
println!("number of diamonds is {}", exclusion_zones.len());
let beacons = exclusion_zones
.iter()
.map(|el| (el.bx, el.by))
.collect::<HashSet<_>>();
let sensors = exclusion_zones
.iter()
.map(|el| (el.x, el.y))
.collect::<HashSet<_>>();
let objects = &beacons | &sensors;
// Part 1.
let count = exclusion_zones
.iter()
.map(|el| el.xs_at_y(&y))
.flatten()
// This is a lazy filter to detect clashes with existing objects but I didn't want to put
// too much effort into part 1.
.filter(|el| !objects.contains(&(*el, y)))
.collect::<HashSet<_>>()
.len();
println!("number of points along {} is {}", y, count);
// Part 2.
// There is guaranteed to be exactly one point.
let missing = (min..max + 1).find_map(|y| find_missing_x(&exclusion_zones, min, max + 1, y));
if let Some((x, y)) = missing {
// The tuning frequency is defined as x * 4,000,000 + y regardless of the search bounds.
println!("tuning frequency is {}", x * 4_000_000 + y);
Ok(())
} else {
Err(Error::msg("there is no distress beacon"))
}
}
fn main() -> Result<()> {
solve(SAMPLE1, 10, (0, 20))?;
solve(REAL, 2_000_000, (0, 4_000_000))?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::str::FromStr;
#[derive(Debug, Eq, PartialEq, Copy, Clone)]
pub struct Diamond {
pub x: isize,
pub y: isize,
pub bx: isize,
pub by: isize,
pub dist: isize,
}
#[derive(Debug, Eq, PartialEq)]
pub struct Range {
pub left: isize,
pub right: isize,
}
impl Range {
pub fn clamp(&self, min: isize, max: isize) -> Range {
if self.right <= min || self.left >= max {
NULL_RANGE
} else {
Range {
left: self.left.clamp(min, max),
right: self.right.clamp(min, max),
}
}
}
}
pub const NULL_RANGE: Range = Range { left: 0, right: 0 };
impl FromStr for Diamond {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s.split_whitespace().collect::<Vec<_>>().as_slice() {
["Sensor", "at", sensor_x, sensor_y, "closest", "beacon", "is", "at", beacon_x, beacon_y] =>
{
if !sensor_x.starts_with("x=") || !sensor_y.starts_with("y=") {
return Err(Error::msg("malformed sensor coordinates"));
}
if !beacon_x.starts_with("x=") || !beacon_y.starts_with("y=") {
return Err(Error::msg("malformed beacon coordinates"));
}
let conv = |val: &str| {
val.trim_start_matches("x=")
.trim_start_matches("y=")
.trim_end_matches(",")
.trim_end_matches(":")
.parse::<isize>()
};
let x = conv(sensor_x)?;
let y = conv(sensor_y)?;
let bx = conv(beacon_x)?;
let by = conv(beacon_y)?;
let dist = (bx - x).abs() + (by - y).abs();
if dist > 0 {
Ok(Self { x, y, bx, by, dist })
} else {
Err(Error::msg("sensor and beacon coincide (zero distance)"))
}
}
_ => Err(Error::msg("cannot parse point")),
}
}
}
impl Diamond {
pub fn xs_at_y(&self, y: &isize) -> std::ops::Range<isize> {
let dist = (y - self.y).abs();
if dist <= self.dist {
let remaining = self.dist - dist;
self.x - remaining..self.x + remaining + 1
} else {
0..0
}
}
pub fn xrange_at_y(&self, y: &isize) -> Range {
let dist = (y - self.y).abs();
if dist <= self.dist {
let remaining = self.dist - dist;
Range {
left: self.x - remaining,
right: self.x + remaining + 1,
}
} else {
NULL_RANGE
}
}
}
There are no tests this time.
Please have a look at src/main.rs for the expected names of input files.
Assuming the expected files are present, you only need to execute the command cargo run --release to run the solution.
The expected input files are usually called sample.dat and stage_1.dat.
Today’s part 2 was pretty tricky and I guess rust’s efficiency kind of saved me. I actually assumed something completely different, but luckily I refrained from overcomplicating part 1 because of that.
Part 1 was a straightforward backtracking problem. A stupid mistake caused me to lose quite some time, though, which meant I had a lot less time for part 2. The main idea I had was not to simulate the passing of time but instead have each valve contribute directly to the overall release value because we know for how long it will be opened. Furthermore, I computed pairwise distances before running any backtracking and ignored all those valves that had a zero release rate. Pairwise distances can be computed easily by bubbling a nearest neighbour sphere outwards.
I still solved part 2 like a backtracking problem. The added complexity comes from the fact that the elephant can also act.
I modelled the world in a way that has the human act first on a specific number of valves. Then time is reset, and the elephant acts on as many valves as it likes from among those valves that the human hasn’t acted on yet, until there are no more valves left or time runs out. I then decrease the number of valves the human acts on in steps of 1, starting at the maximum possible number, and take the overall highest release value.
I added a quick stopping condition, which worked for my inputs but might not translate.
Set the env var QUICK=0 to disable quick mode, which is enabled by default.
Quick mode assumes that, as the number of valves opened by the human decreases, the best possible release value will only ever increase.
With quick mode enabled, this runs in <20s on my >10-year-old business notebook. Without quick mode, it takes <3min.
main.rs
use anyhow::{Error, Result};
use std::collections::{HashMap, HashSet};
// Constants.
// We map each two-character valve name to a unique u8 via a lookup table (see str_to_usize in
// data.rs). "AA" maps to zero.
const START: u8 = 0;
fn pairwise_distances(valve_map: &HashMap<u8, &data::Valve>) -> HashMap<(u8, u8), usize> {
let mut result = HashMap::<(u8, u8), usize>::new();
for (name, valve) in valve_map {
let mut curr_dist = 1 as usize;
// All neighbours have a distance of one.
let mut distances = valve
.neighbours
.iter()
.map(|el| (el.clone(), curr_dist))
.collect::<HashMap<_, _>>();
// We always remember new points we added. Those points have a distance of one more than the
// previous max.
let mut new_valves = distances
.iter()
.map(|el| el.0.clone())
.collect::<HashSet<_>>();
// We include the point itself for ease of use later on.
distances.insert(name.clone(), 0);
// Repeat until no more neighbours have been added.
while new_valves.len() != 0 {
curr_dist += 1;
// Determine those valve names that are at the next distance.
new_valves = new_valves
.iter()
// Extract the actual valve data.
.filter_map(|el| valve_map.get(el))
// Get all neighbours of those valves we just extracted in one big iterator.
.map(|el| el.neighbours.iter())
.flatten()
// Get a unique set of the names of those valves that may have been added in this
// iteration.
.collect::<HashSet<_>>()
.into_iter()
// Keep only those to which we haven't yet computed distances.
.filter_map(|el| {
if let Some(_) = distances.get(el) {
None
} else {
Some(el.clone())
}
})
.collect::<HashSet<_>>();
// Add those distances.
distances.extend(new_valves.iter().map(|el| (el.clone(), curr_dist)));
}
result.extend(
distances
.into_iter()
.map(|(new_valve, dist)| ((name.clone(), new_valve), dist)),
);
}
result
}
fn backtrack(
valve_name_map: &HashMap<u8, &data::Valve>,
distances: &HashMap<(u8, u8), usize>,
relevant_valves: &Vec<Option<u8>>,
current_spot: &u8,
current_time: usize,
max_time: usize,
current_best: usize,
visited: &mut HashSet<u8>,
allow_elephant: bool,
elephant_depth: usize,
) -> Result<usize> {
// Check that None is the first entry in the list of relevant_valves.
if allow_elephant && !relevant_valves[0].as_ref().is_none() {
return Err(Error::msg("the first relevant valve has to be None"));
}
let mut next_best = current_best;
for maybe_next_spot in relevant_valves.iter() {
let possible_best = if let Some(next_spot) = maybe_next_spot {
// Let the human do their thing.
// Skip spots we've already seen.
if visited.contains(next_spot) {
continue;
}
let time_spent_traveling = distances
.get(&(current_spot.clone(), next_spot.clone()))
.ok_or(Error::msg("cannot retrieve distance"))?;
// We add 1 because of the time it takes to open the valve.
let time_of_next_valve_opening = current_time + time_spent_traveling + 1;
// Check whether we have any time left to open another valve.
if time_of_next_valve_opening >= max_time {
// If not, we found the best we can using this route. Skip this spot.
continue;
} else {
// We will be able to open that valve.
// Remember that we visited it.
visited.insert(next_spot.clone());
// Check by how much the next valve can increase our release value.
let next_valve_rate = valve_name_map
.get(next_spot)
.ok_or(Error::msg("cannot retrieve valve"))?
.rate;
let benefit = (max_time - time_of_next_valve_opening) * next_valve_rate;
backtrack(
valve_name_map,
distances,
relevant_valves,
&next_spot,
time_of_next_valve_opening,
max_time,
current_best + benefit,
visited,
allow_elephant,
elephant_depth,
)?
}
} else if allow_elephant && visited.len() == elephant_depth {
// Let the elephant also do its thing, allowing it only to visit those points that we
// haven't yet visited. Actually, this has to be the first step we do for the algorithm
// to work. Otherwise, better paths will be missed.
backtrack(
valve_name_map,
distances,
relevant_valves,
// The elephant starts at the start.
&START,
// Time is reset.
0,
max_time,
// We will still have acted, meaning the elephant can only ever increase the
// release value.
next_best,
// The elephant may only visit those valves that we haven't yet visited.
visited,
// Don't allow yet another elephant to explore.
false,
elephant_depth,
)?
} else {
// This block allows us to still run part 1 despite all the code related to elephants.
0
};
if possible_best > next_best {
next_best = possible_best;
}
// Forget that we visited the point but only if this isn't the elephant's turn.
if let Some(next_spot) = maybe_next_spot {
visited.remove(next_spot);
}
}
// Return the best one we found.
Ok(next_best)
}
fn solve(file: &str) -> Result<()> {
println!("PROCESSING {}", file);
// Read file and convert into data.
let valves = io::parse_chunks_to_data::<data::Valve>(
io::read_lines_from_file(file, 1)?,
"valves",
None,
None,
)?;
// We only want to look at valves that have non-zero rates for part 1. I suspect it's gonna be
// different for part 2. Nope, it's not.
let relevant_valves = valves
.iter()
.filter_map(|el| {
if el.rate > 0 {
Some(el.name.clone())
} else {
None
}
})
.map(|el| Some(el))
.collect::<Vec<_>>();
// Compute pairwise distances.
let valve_name_map = valves
.iter()
.map(|el| (el.name.clone(), el))
.collect::<HashMap<_, _>>();
let distances = pairwise_distances(&valve_name_map);
// Backtrack the solution.
let start_time = 0;
let max_time = 30;
let start_release = 0;
let mut visited = HashSet::<_>::with_capacity(relevant_valves.len());
let max_part1 = backtrack(
&valve_name_map,
&distances,
&relevant_valves,
&START,
start_time,
max_time,
start_release,
&mut visited,
false,
0,
)?;
println!("part 1: {}", max_part1);
// Part 2.
// Backtrack the solution.
let max_time_part2 = 26;
// A value of None means that the human should stop what they are doing and let the elephant do
// its thing. It has to be the first entry in relevant_valves. Otherwise, better paths are
// lost, somehow.
let mut relevant_with_elephant = vec![None];
relevant_with_elephant.extend_from_slice(&relevant_valves);
// Quick mode uses the below assumptions and we use it by default.
let quick_mode = std::env::var("QUICK").unwrap_or("1".to_string()) == "1";
let mut best = 0;
// This is a bit hacky but it works. We check what happens if the human is guaranteed a certain
// number of valves because the elephant can only open those that the human didn't approach.
// Then, we assume that, the fewer valves the human opens, the higher the overall pressure
// release gets because the elephant can open some. At some point, since this is at least a
// certain number of steps, there will no longer be an increase followed by a decrease, which
// is our stopping point. This is an assumption, which turns out to hold, but it might not for
// all possible inputs. If it doesn't, simply disable quick mode.
for num_human_valves in (0..relevant_valves.len()).rev() {
// Reset some mutable data structures.
visited = HashSet::<_>::with_capacity(relevant_valves.len());
let max = backtrack(
&valve_name_map,
&distances,
&relevant_with_elephant,
&START,
start_time,
max_time_part2,
start_release,
&mut visited,
true,
num_human_valves,
)?;
println!(
"part 2: with {} guaranteed human valves: {}",
num_human_valves, max
);
if quick_mode {
if max < best {
break;
} else {
best = max;
}
} else {
if max > best {
best = max;
}
}
}
println!("part 2: {}", best);
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE1)?;
solve(REAL)?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::str::FromStr;
#[derive(Debug)]
pub struct Valve {
pub name: u8,
pub rate: usize,
pub neighbours: Vec<u8>,
}
// Using u8 instead of two-character strings speeds the entire thing up by about a factor of 4. If
// you have other character combinations, simply add them here.
pub fn str_to_usize(s: &str) -> Option<u8> {
match s {
"AA" => Some(0),
"AF" => Some(1),
"AK" => Some(2),
"BB" => Some(3),
"BC" => Some(4),
"BF" => Some(5),
"BV" => Some(6),
"CA" => Some(7),
"CC" => Some(8),
"CM" => Some(9),
"DD" => Some(10),
"DO" => Some(11),
"DW" => Some(12),
"EE" => Some(13),
"EI" => Some(14),
"EJ" => Some(15),
"EV" => Some(16),
"FD" => Some(17),
"FF" => Some(18),
"FN" => Some(19),
"GG" => Some(20),
"GO" => Some(21),
"GW" => Some(22),
"HH" => Some(23),
"HO" => Some(24),
"HP" => Some(25),
"HR" => Some(26),
"HX" => Some(27),
"II" => Some(28),
"IR" => Some(29),
"JJ" => Some(30),
"JQ" => Some(31),
"JS" => Some(32),
"KH" => Some(33),
"KL" => Some(34),
"KQ" => Some(35),
"KX" => Some(36),
"LR" => Some(37),
"MS" => Some(38),
"MW" => Some(39),
"NB" => Some(40),
"NC" => Some(41),
"NQ" => Some(42),
"OF" => Some(43),
"OM" => Some(44),
"OQ" => Some(45),
"OX" => Some(46),
"PC" => Some(47),
"PD" => Some(48),
"PH" => Some(49),
"PU" => Some(50),
"QE" => Some(51),
"RX" => Some(52),
"RZ" => Some(53),
"SG" => Some(54),
"SM" => Some(55),
"SY" => Some(56),
"TN" => Some(57),
"TS" => Some(58),
"TY" => Some(59),
"UE" => Some(60),
"VL" => Some(61),
"WE" => Some(62),
"WU" => Some(63),
"WW" => Some(64),
"XG" => Some(65),
"XN" => Some(66),
"YD" => Some(67),
"YQ" => Some(68),
"ZQ" => Some(69),
"ZX" => Some(70),
_ => None,
}
}
impl FromStr for Valve {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
let name;
let rate: usize;
let neighbours: Vec<_>;
match s.split(";").collect::<Vec<_>>().as_slice() {
[valve_data, tunnel_data] => {
match valve_data.split_whitespace().collect::<Vec<_>>().as_slice() {
["Valve", id, "has", "flow", rate_str] => {
name =
str_to_usize(id).ok_or(Error::msg("cannot convert chars to usize"))?;
rate = rate_str.trim_start_matches("rate=").parse()?;
}
_ => {
return Err(Error::msg("cannot parse valve data"));
}
}
if !tunnel_data.starts_with(" tunnels lead to valve")
&& !tunnel_data.starts_with(" tunnel leads to valve")
{
return Err(Error::msg("cannot parse tunnel data"));
}
let maybe_neighbours = tunnel_data
.split_whitespace()
.skip(4)
.map(|el| str_to_usize(el.trim().trim_end_matches(",")))
.collect::<Vec<_>>();
// We let this panic if it wants to.
neighbours = maybe_neighbours
.into_iter()
.map(|el| el.unwrap())
.collect::<Vec<_>>();
Ok(Self {
name,
rate,
neighbours,
})
}
_ => Err(Error::msg("cannot parse valve")),
}
}
}
There are no tests this time.
Please have a look at src/main.rs for the expected names of input files.
Assuming the expected files are present, you only need to execute cargo run to run the solution.
The expected input files are usually called sample.dat and stage_1.dat.
This is my implementation for both rounds of today’s puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language, while the creation of easily readable code is a second priority, this time. Writing fast code is a far-away third priority.
All source code lives in the src directory.
This solution contains a main.rs, which defines the main executable.
There is also a data.rs, which specifies data types important for this solution as well as associated functions, methods, and traits.
There is also an io.rs, which contains helper functions related to input parsing and output processing.
Part 1 was a straightforward implementation of a tetris-like game.
I used a HashSet of 2d positions to represent spots that were occupied by rocks.
If you read the rules carefully and don’t make a mistake with the shape of the rocks, you’re good.
Part 2 was very tricky for me and I fell into so many traps that I almost gave up. It was clear from the start that you can’t simply keep expanding the playing field until the 1 billionth rock has dropped. Instead, there had to be some repetition involved. Finding the correct point from which to extrapolate into the future was the crux here. It basically took an entire day of background thinking and trying things out every now and again to solve.
There are two infinite streams that need to repeat: the stream of rocks and the stream of air. For the example, the lengths of both are nicely divisible; not so much for the real puzzle. Instead, I check the ground each square rock has settled on, as well as the position in the infinite air stream at that time. If the shape of the ground, the type of rock, and the position in the air stream have the same values again, we have found our repetition. Ground extraction is explained in the code and works reasonably well. I even neglect some grounds that would be tricky to work with, which is maybe not even needed.
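Stripped of the ground bookkeeping, this repetition trick is generic cycle detection with arithmetic fast-forwarding: remember the round and height at which each state key first appeared, and once a key repeats, skip whole cycles at once. A minimal sketch, with a hypothetical deterministic step function standing in for "drop one rock" (the real key is the rock/air-stream/ground triple):

```rust
use std::collections::HashMap;

// `step` maps a state key to (next key, height gained this round). Once a key
// repeats, the simulation is periodic, so the remaining whole cycles can be
// skipped arithmetically instead of simulated.
fn height_after(start: u64, n: u64, step: impl Fn(u64) -> (u64, u64)) -> u64 {
    let mut seen = HashMap::new(); // key -> (round, height) when first seen
    let (mut key, mut round, mut height) = (start, 0u64, 0u64);
    while round < n {
        if let Some(&(prev_round, prev_height)) = seen.get(&key) {
            // Cycle found: fast-forward as many whole cycles as still fit.
            let cycle_len = round - prev_round;
            let cycle_gain = height - prev_height;
            let loops = (n - round) / cycle_len;
            round += loops * cycle_len;
            height += loops * cycle_gain;
            // Avoid re-triggering on the same keys while simulating the rest.
            seen.clear();
        } else {
            seen.insert(key, (round, height));
        }
        let (next, gain) = step(key);
        key = next;
        height += gain;
        round += 1;
    }
    height
}
```

After the fast-forward, fewer than one cycle of rounds remains, and those are simulated normally.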
main.rs
use anyhow::{Error, Result};
use std::collections::{HashMap, HashSet};
use std::iter::Cycle;
use std::vec::IntoIter;
// Constants.
fn render(field: &HashSet<data::Pos>, rock: &data::Rock, pos: &data::Pos) {
let max_x = 8;
let max_y = field.iter().map(|el| el.y).max().unwrap_or(10) + 10;
let min_x = 0;
let min_y = 0;
let rock_fields = rock.occupied_fields(pos).collect::<HashSet<_>>();
for y in (min_y..max_y).rev() {
for x in min_x..=max_x {
let pos = data::Pos { x, y };
if field.contains(&pos) && rock_fields.contains(&pos) {
// This is a conflict field.
print!("X")
} else if field.contains(&pos) {
print!("#")
} else if rock_fields.contains(&pos) {
print!("@")
} else if y == 0 {
print!("-")
} else if x == 0 || x == 8 {
print!("|")
} else {
print!(".");
}
}
print!("\n");
}
print!("\n");
}
fn is_blocked(field: &HashSet<data::Pos>, check: &data::Pos, width: isize) -> data::Blocked {
if check.x <= 0 || check.x >= width {
data::Blocked::Wall
} else if field.contains(check) {
data::Blocked::Rock
} else if check.y == 0 {
// This is hacky but we simulate a floor of rocks so that a downward operation will have
// the rock to settle on.
data::Blocked::Rock
} else {
data::Blocked::None
}
}
// Nice means no rock can muddle its way through. There is a direct connection from the left wall
// to the right wall.
fn is_nice_ground(ground: &HashSet<data::Pos>, field: &HashSet<data::Pos>) -> bool {
let mut x = 1;
let mut candidates = ground
.iter()
.filter_map(|el| if el.x == x { Some(el.clone()) } else { None })
.collect::<HashSet<data::Pos>>();
candidates = &candidates & field;
while candidates.len() != 0 {
x += 1;
candidates = candidates
.into_iter()
.map(|el| el.right_env())
.flatten()
.collect::<HashSet<data::Pos>>();
candidates = &candidates & field;
}
x == 8
}
// We try to find the ground the last rock settled on. We basically go from the top to the bottom
// until we have found at least one rock at each of the possible 7 x positions. For peace of mind,
// we only consider nice grounds (see is_nice_ground for a definition).
fn get_ground(field: &HashSet<data::Pos>, top_rock: isize) -> Option<HashSet<data::Pos>> {
let mut found = vec![false; 7];
let mut bottom = HashSet::<data::Pos>::new();
let mut min_y = top_rock;
for y in (1..=top_rock).rev() {
min_y = y;
for x in 1..=7 {
let check = data::Pos { x, y };
if field.contains(&check) {
found[(x - 1) as usize] = true;
bottom.insert(check);
}
}
if found.iter().all(|el| *el) {
break;
}
}
if is_nice_ground(&bottom, field) {
let disp = data::Pos { x: 0, y: -min_y };
Some(bottom.into_iter().map(|el| el.add(&disp)).collect())
} else {
None
}
}
// Rep stands for repetition.
#[derive(Debug)]
struct Rep {
round: usize,
top_rock: isize,
ground: HashSet<data::Pos>,
}
// Yeah, we are playing tetris today.
// This function looked OK until after part 1 but now it's horrible, but I don't want to take any
// time to refactor.
fn play_tetris(
mut stream: Cycle<IntoIter<data::Push>>,
mut rocks: Cycle<IntoIter<data::Rock>>,
max_num_rocks: usize,
num_steam_steps: usize,
num_rock_steps: usize,
// trigger for part 2.
do_repeat: bool,
) -> usize {
let mut possible_rep_store = HashMap::<(usize, usize), Vec<Rep>>::new();
// We will track the positions of all rocks with this.
let mut field = HashSet::<data::Pos>::new();
// At the beginning, there is no rock yet. Thus, the top position is the floor.
let mut top_rock = 0;
// Round counts rocks.
let mut round = 0;
// Step counts gusts of air.
let mut step = 0;
while round < max_num_rocks {
round += 1;
let rock = rocks.next().expect("weren't there infinitely many rocks");
// Spawn a rock.
let mut pos = data::Pos {
// There also have to be 2 free spaces in x direction. Thus, it spawns at x
// coordinate 3.
x: 3,
// There have to be 3 free spaces in y direction. Thus, it spawns 4 units
// further above.
y: top_rock + 4,
};
let mut has_settled = false;
while !has_settled {
// Get the next stream element. This is an infinite iterator.
let push = stream.next().expect("wasn't this supposed to be infinite");
step += 1;
// Apply the movement operation to the side.
let next_pushed = push.apply(&pos);
// Check whether there is any collision.
let push_blocked_check = rock
.occupied_fields(&next_pushed)
.map(|el| is_blocked(&field, &el, 8))
.find(|el| el != &data::Blocked::None);
// Accept the movement only if we haven't been blocked.
if let None = push_blocked_check {
pos = next_pushed;
}
// Move downwards and check again.
let next_dropped = pos.drop();
// Check whether there is any collision.
let drop_blocked_check = rock
.occupied_fields(&next_dropped)
.map(|el| is_blocked(&field, &el, 8))
.find(|el| el == &data::Blocked::Rock);
// Accept the movement only if we haven't been blocked.
if let Some(_) = drop_blocked_check {
// The rock has settled!
has_settled = true;
// Find the topmost position and occupy all fields of the rock.
field.extend(rock.occupied_fields(&pos));
// Check whether the rock that just settled reaches higher than before.
let possible_top_rock = rock
.occupied_fields(&pos)
.map(|el| el.y)
.max()
.expect("cannot find top rock");
if possible_top_rock > top_rock {
top_rock = possible_top_rock;
}
} else {
// The rock hasn't settled yet. Accept the update.
pos = next_dropped;
}
}
// This entire block is horrible and takes care of part 2. Ignore it for part 1.
if do_repeat && round % num_rock_steps == 0 {
// If we're in here, a square block has settled.
// Determine our position in the repeating rock and steam streams.
let rep_key = (round % num_rock_steps, step % num_steam_steps);
if let Some(ground) = get_ground(&field, top_rock) {
// If we're in here, then the rock has settled on a patch of ground that is nice
// (i.e. where no rock can muddle its way through).
if let Some(possible_rep) = possible_rep_store.get_mut(&rep_key) {
// If we're in here, we have already found this type of rock settling at this
// position in the stream/gust of air sequence.
// Check whether the ground repeated, too.
if let Some(rep) = possible_rep.iter().find_map(|el| {
if el.ground.len() == ground.len() && (&el.ground ^ &ground).len() == 0 {
Some(el)
} else {
None
}
}) {
// If we're in here, we found the same type of rock that settled at the
// same position in the steam stream AND that rock settled on an identical
// patch of ground as a previous rock. That means everything will repeat
// from here on out.
//
// Clear the field to save memory. Then, fast forward time and use the most
// recently found ground to reinitialise the field.
field.clear();
let rounds_in_loop = round - rep.round;
let increase_per_loop = (top_rock - rep.top_rock) as usize;
let loops_remaining = (max_num_rocks - round) / rounds_in_loop;
round += loops_remaining * rounds_in_loop;
top_rock += (loops_remaining * increase_per_loop) as isize;
// Displace the ground to where it belongs. It will now form the top of the
// field. Because the ground is nice, no rock can fall through it.
let top_rock_in_ground = ground
.iter()
.map(|el| el.y)
.max()
.expect("there is no ground");
let disp = data::Pos {
x: 0,
y: top_rock - top_rock_in_ground,
};
field = ground.into_iter().map(|el| el.add(&disp)).collect();
} else {
// Remember this ground.
possible_rep.push(Rep {
round,
top_rock,
ground,
});
}
} else {
// Remember the ground for this combination of rock and position in the steam
// stream.
let rep_val = Rep {
round,
top_rock,
ground,
};
possible_rep_store.insert(rep_key, vec![rep_val]);
}
}
}
}
top_rock as usize
}
fn solve(file: &str, max_num_rocks: usize) -> Result<()> {
println!("PROCESSING {}", file);
// Read file and convert into data.
let stream = io::parse_chunks_to_data::<data::Stream>(
io::read_lines_from_file(file, 1)?,
"stream",
None,
None,
)?
.into_iter()
.nth(0)
.ok_or(Error::msg("found no stream"))?;
let num_pushes = stream.flow.len();
println!("push sequence has {} elements", num_pushes);
let num_rocks = std::mem::variant_count::<data::Rock>();
println!("rock sequence has {} elements", num_rocks);
let tallness = play_tetris(
stream.infinite(),
data::Rock::infinite_stream(),
max_num_rocks,
num_pushes,
num_rocks,
// Works but hacky trigger for part 2 ^^.
max_num_rocks > 10_000,
);
println!(
"tower will be {} tall after {} rocks",
tallness, max_num_rocks
);
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE1, 2022)?;
solve(REAL, 2022)?;
solve(SAMPLE1, 1_000_000_000_000)?;
solve(REAL, 1_000_000_000_000)?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::str::FromStr;
#[derive(Debug, Clone)]
pub enum Push {
Left,
Right,
}
#[derive(Debug, Hash, Eq, PartialEq, Clone)]
pub struct Pos {
pub x: isize,
pub y: isize,
}
#[derive(Debug)]
pub struct Stream {
pub flow: Vec<Push>,
}
// The position of each rock is indicated by the point in its bottom left. There doesn't have to be
// anything there?
#[derive(Debug, Clone, PartialEq)]
pub enum Rock {
Minus,
Plus,
InverseL,
I,
Block,
}
#[derive(Debug, PartialEq)]
pub enum Blocked {
Rock,
Wall,
None,
}
impl Pos {
pub fn add(&self, other: &Self) -> Self {
Self {
x: self.x + other.x,
y: self.y + other.y,
}
}
pub fn drop(&self) -> Self {
Self {
x: self.x,
y: self.y - 1,
}
}
pub fn right_env(&self) -> Vec<Self> {
vec![
Self {
x: self.x + 1,
y: self.y + 1,
},
Self {
x: self.x + 1,
y: self.y,
},
Self {
x: self.x + 1,
y: self.y - 1,
},
]
}
}
impl Push {
pub fn apply(&self, start: &Pos) -> Pos {
match self {
Self::Left => Pos {
x: -1 + start.x,
y: start.y,
},
Self::Right => Pos {
x: 1 + start.x,
y: start.y,
},
}
}
}
impl Rock {
// Get an infinite stream of rocks in the correct order.
pub fn infinite_stream() -> std::iter::Cycle<std::vec::IntoIter<Rock>> {
vec![
Self::Minus,
Self::Plus,
Self::InverseL,
Self::I,
Self::Block,
]
.into_iter()
.cycle()
}
// We order the returned positions in such a way that we have the highest likelihood of getting
// a collision early on.
pub fn occupied_fields(&self, start: &Pos) -> std::vec::IntoIter<Pos> {
match self {
Self::Minus => vec![
Pos {
x: 0 + start.x,
y: 0 + start.y,
},
Pos {
x: 3 + start.x,
y: 0 + start.y,
},
Pos {
x: 1 + start.x,
y: 0 + start.y,
},
Pos {
x: 2 + start.x,
y: 0 + start.y,
},
],
// This one has no rock at 0,0!
Self::Plus => vec![
Pos {
x: 1 + start.x,
y: 0 + start.y,
},
Pos {
x: 2 + start.x,
y: 1 + start.y,
},
Pos {
x: 0 + start.x,
y: 1 + start.y,
},
Pos {
x: 1 + start.x,
y: 1 + start.y,
},
Pos {
x: 1 + start.x,
y: 2 + start.y,
},
],
Self::Block => vec![
Pos {
x: 0 + start.x,
y: 0 + start.y,
},
Pos {
x: 1 + start.x,
y: 0 + start.y,
},
Pos {
x: 0 + start.x,
y: 1 + start.y,
},
Pos {
x: 1 + start.x,
y: 1 + start.y,
},
],
Self::I => vec![
Pos {
x: 0 + start.x,
y: 0 + start.y,
},
Pos {
x: 0 + start.x,
y: 3 + start.y,
},
Pos {
x: 0 + start.x,
y: 1 + start.y,
},
Pos {
x: 0 + start.x,
y: 2 + start.y,
},
],
Self::InverseL => vec![
Pos {
x: 0 + start.x,
y: 0 + start.y,
},
Pos {
x: 1 + start.x,
y: 0 + start.y,
},
Pos {
x: 2 + start.x,
y: 0 + start.y,
},
Pos {
x: 2 + start.x,
y: 1 + start.y,
},
Pos {
x: 2 + start.x,
y: 2 + start.y,
},
],
}
.into_iter()
}
}
impl Stream {
// This consumes the stream object, but we don't need it anymore.
pub fn infinite(self) -> std::iter::Cycle<std::vec::IntoIter<Push>> {
self.flow.into_iter().cycle()
}
}
impl FromStr for Stream {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
Ok(Self {
flow: s
.chars()
.map(|el| match el {
'>' => Push::Right,
'<' => Push::Left,
_ => panic!("oh no, we got weird input"),
})
.collect(),
})
}
}
There are no tests this time.
Please have a look at src/main.rs for the expected names of input files.
Assuming the expected files are present, you only need to execute cargo run to run the solution.
The expected input files are usually called sample.dat and stage_1.dat.
This is my implementation for both rounds of today’s puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language, while the creation of easily readable code is a second priority, this time. Writing fast code is a far-away third priority.
All source code lives in the src directory.
This solution contains a main.rs, which defines the main executable.
There is also a data.rs, which specifies data types important for this solution as well as associated functions, methods, and traits.
There is also an io.rs, which contains helper functions related to input parsing and output processing.
Today was easy, yeah! For part 1, simply put all lava points in a set. Then, for each lava point, check for each neighbour whether that one is a lava point itself. If not, count it.
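That counting recipe is short enough to sketch directly; here with a plain (i8, i8, i8) tuple standing in for the solution's data::Point:

```rust
use std::collections::HashSet;

type P = (i8, i8, i8);

// The six axis-aligned neighbours of a unit cube.
fn env(p: &P) -> [P; 6] {
    let (x, y, z) = *p;
    [
        (x - 1, y, z),
        (x + 1, y, z),
        (x, y - 1, z),
        (x, y + 1, z),
        (x, y, z - 1),
        (x, y, z + 1),
    ]
}

// Every neighbour slot that is not itself lava is one exposed face.
fn surface_area(lava: &HashSet<P>) -> usize {
    lava.iter()
        .flat_map(env)
        .filter(|p| !lava.contains(p))
        .count()
}
```

A lone cube has 6 exposed faces; two adjacent cubes share one face each and have 10.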
Part 2 was a bit more tricky and I lost some time due to an overflow that I didn’t notice at first. (Note to self: Always run in development mode first and only switch to release mode if the speed-up is needed, because release mode will silently ignore overflows.)
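The overflow trap is easy to demonstrate: debug builds panic on arithmetic overflow while release builds silently wrap, so the checked_*/wrapping_* methods make the intended behaviour explicit in both profiles. A tiny illustration, not taken from the solution itself:

```rust
// `i8::MAX + 1` panics in a debug build but silently wraps to -128 in a
// release build. The checked variant returns None in both profiles instead.
fn safe_inc(x: i8) -> Option<i8> {
    x.checked_add(1)
}
```

For instance, safe_inc(i8::MAX) yields None, while i8::MAX.wrapping_add(1) is i8::MIN in every build profile.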
I was wondering how to detect air pockets. My idea was to repurpose the A* algorithm developed for day 12. I first extended it to 3 dimensions, which was straightforward. Then, I found the smallest cuboid that contains all lava and had A* search a path from each block in that cuboid to a block in a top corner. To ensure that I would always be able to find a path from any block that is connected to the outside, I added one layer of air to each of the 6 sides of the cuboid. Whenever I could not find a path, all blocks that the algorithm looked at are known to be part of air pockets. Whenever a path could be found, all blocks that the algorithm looked at are known not to be air pockets. That helps reduce the number of blocks to look at.
In the end, I fill all air pockets with fake lava and apply the same algorithm from part 1 again. I guess a breadth-first search would have been more applicable, but I already had a working pathfinding algorithm, so I repurposed it.
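The breadth-first alternative can be sketched in a few lines: flood-fill the outside air of the padded bounding box, then count lava faces that touch outside air. Again with a hypothetical (i8, i8, i8) point type rather than the solution's data::Node graph:

```rust
use std::collections::{HashSet, VecDeque};

type P = (i8, i8, i8);

// The six axis-aligned neighbours of a unit cube.
fn env(p: &P) -> [P; 6] {
    let (x, y, z) = *p;
    [
        (x - 1, y, z),
        (x + 1, y, z),
        (x, y - 1, z),
        (x, y + 1, z),
        (x, y, z - 1),
        (x, y, z + 1),
    ]
}

// Flood-fill all air reachable from `min`, a corner of a bounding box that has
// been padded by one layer of air. Unreached air cells are interior pockets.
fn outside_air(lava: &HashSet<P>, min: P, max: P) -> HashSet<P> {
    let inside = |p: &P| {
        (min.0..=max.0).contains(&p.0)
            && (min.1..=max.1).contains(&p.1)
            && (min.2..=max.2).contains(&p.2)
    };
    let mut seen = HashSet::from([min]);
    let mut frontier = VecDeque::from([min]);
    while let Some(p) = frontier.pop_front() {
        for n in env(&p) {
            if inside(&n) && !lava.contains(&n) && seen.insert(n) {
                frontier.push_back(n);
            }
        }
    }
    seen
}

// Exterior surface area: lava faces whose neighbour is outside air, which
// automatically excludes faces that border interior pockets.
fn exterior_surface(lava: &HashSet<P>, min: P, max: P) -> usize {
    let outside = outside_air(lava, min, max);
    lava.iter()
        .flat_map(env)
        .filter(|p| outside.contains(p))
        .count()
}
```

A hollow 3x3x3 shell, for example, has an exterior surface of 54 because the faces around the one-cube pocket in its centre are never counted.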
main.rs
use anyhow::{Error, Result};
use std::collections::{HashMap, HashSet};
// Constants.
fn find_path<'a>(
start: &'a data::Node,
end: &'a data::Node,
graph: &'a HashSet<data::Node>,
estimator_fn: fn(&data::Node, &data::Point) -> usize,
) -> Result<HashMap<&'a data::Node, (Option<data::Point>, usize)>> {
let ref_point = end.pos();
let estimator = move |node: &data::Node| estimator_fn(node, &ref_point);
let get_node = move |node: &data::Point| {
graph
.get(&node.as_node())
.ok_or(Error::msg("node not found"))
};
let mut connections = HashMap::<&'a data::Node, (Option<data::Point>, usize)>::new();
let mut checkable = HashMap::<&data::Node, (data::Point, usize)>::new();
// Add starting point to resulting path.
connections.insert(&start, (None, 0));
// Add neighbours of starting point to list of checkable values. Ignore neighbouring points
// that are not part of the graph.
for neigh in start.neighbours() {
let neigh_node = get_node(neigh)?;
// Estimated costs are the most direct possible connection plus 1, since every step costs
// one.
checkable.insert(neigh_node, (start.pos(), estimator(neigh_node)));
// connections.insert(neigh_node, (Some(start.pos()), 1));
}
// Search until we added the final node to the path or until there is nothing more to check.
while !connections.contains_key(&end) && checkable.len() > 0 {
// Get node with minimum _estimated_ cost.
let next_best_node = checkable
.iter_mut()
// Get node with minimum estimated cost.
.min_by(|(_node1, (_pre1, cost1)), (_node2, (_pre2, cost2))| cost1.cmp(&cost2))
.ok_or(Error::msg("cannot find next node"))?
.0
.clone();
let (_, (predecessor, _old_estimate)) = checkable
.remove_entry(next_best_node)
.ok_or(Error::msg("cannot find predecessor"))?;
let cost_of_predecessor = connections
.get(&predecessor.as_node())
.ok_or(Error::msg("predecessor has not been visited"))?
.1;
// Add point to resulting path.
connections.insert(next_best_node, (Some(predecessor), cost_of_predecessor + 1));
// Add neighbours of point to list of checkable values.
for neigh in next_best_node.neighbours() {
let neigh_node = get_node(neigh)?;
if !connections.contains_key(neigh_node) {
let estimate = cost_of_predecessor + estimator(neigh_node);
let previous_best = checkable
.get(neigh_node)
.unwrap_or(&(*neigh, std::usize::MAX))
.1;
if previous_best > estimate {
checkable.insert(neigh_node, (next_best_node.pos(), estimate));
}
// connections.insert(neigh_node, Some(start.pos()));
}
}
}
Ok(connections)
}
fn build_graph(
lava: &HashSet<data::Point>,
min: &data::Point,
max: &data::Point,
) -> HashSet<data::Node> {
let mut graph = HashSet::<data::Node>::new();
for x in min.x..=max.x {
for y in min.y..=max.y {
for z in min.z..=max.z {
let point = data::Point { x, y, z };
// Ignore points that are themselves lava.
if let Some(_) = lava.get(&point) {
continue;
}
// Find all neighbours, which are points that are still within the boundaries and
// not lava.
let neighbours = point
.env()
.into_iter()
.filter_map(|el| {
// The point does not contain lava.
if let None = lava.get(&el) {
// The point is still within the cube boundaries.
if el.x >= min.x
&& el.x <= max.x
&& el.y >= min.y
&& el.y <= max.y
&& el.z >= min.z
&& el.z <= max.z
{
Some(el)
} else {
None
}
} else {
None
}
})
.collect();
let node = data::Node {
p: point,
neighbours,
};
graph.insert(node);
}
}
}
graph
}
fn solve(file: &str) -> Result<()> {
println!("PROCESSING {}", file);
// Read file and convert into data.
let lava_lines = io::parse_chunks_to_data::<data::Point>(
io::read_lines_from_file(file, 1)?,
"pos",
None,
None,
)?;
// Part 1.
let lava = HashSet::<data::Point>::from_iter(lava_lines.into_iter());
let surface_area_part1 = lava
.iter()
.map(|el| el.env().into_iter())
.flatten()
.filter(|el| !lava.contains(el))
.count();
println!("for part 1, the surface area is {}", surface_area_part1);
// Part 2.
let min_x = lava
.iter()
.map(|el| el.x)
.min()
.ok_or(Error::msg("min x"))?;
let max_x = lava
.iter()
.map(|el| el.x)
.max()
.ok_or(Error::msg("max x"))?;
let min_y = lava
.iter()
.map(|el| el.y)
.min()
.ok_or(Error::msg("min y"))?;
let max_y = lava
.iter()
.map(|el| el.y)
.max()
.ok_or(Error::msg("max y"))?;
let min_z = lava
.iter()
.map(|el| el.z)
.min()
.ok_or(Error::msg("min z"))?;
let max_z = lava
.iter()
.map(|el| el.z)
.max()
.ok_or(Error::msg("max z"))?;
let min = data::Point {
x: min_x - 1,
y: min_y - 1,
z: min_z - 1,
};
// Max is also the target for the pathfinding algorithm.
let max = data::Point {
x: max_x + 1,
y: max_y + 1,
z: max_z + 1,
};
// Construct graph that does not contain diagonal connections.
let graph = build_graph(&lava, &min, &max);
// This is the heuristic needed for A*.
let estimator = |node: &data::Node, ref_point: &data::Point| node.infinity_dist(&ref_point);
// Define the end point of the pathfinding.
let end = graph
.get(&max.as_node())
.ok_or(Error::msg("cannot find end node in graph"))?;
// println!("{:?}", graph);
let mut air_pockets = HashSet::<data::Point>::new();
let mut no_air_pockets = HashSet::<data::Point>::new();
for x in min.x..=max.x {
for y in min.y..=max.y {
for z in min.z..=max.z {
let start_point = data::Point { x, y, z };
if lava.contains(&start_point) {
// Do not try to find paths that start at lava.
continue;
}
if air_pockets.contains(&start_point) {
// Do not try to find paths that start at air pockets.
continue;
}
if no_air_pockets.contains(&start_point) {
// Do not try to find paths that start at a connected point.
continue;
}
let start = graph
.get(&start_point.as_node())
.ok_or(Error::msg("cannot find start node in graph"))?;
// Try to find the path to the max node.
let path = find_path(&start, &end, &graph, estimator)?;
// If the end node is not in the found path, we discovered an air pocket.
if path.get(&end).is_none() {
air_pockets = &air_pockets | &path.into_iter().map(|el| el.0.p).collect();
} else {
no_air_pockets = &no_air_pockets | &path.into_iter().map(|el| el.0.p).collect();
}
}
}
}
let lava_with_filled_pockets = &lava | &air_pockets;
// println!("{} => {:?} {:?}", lava.len(), min, max,);
let surface_area_part2 = lava_with_filled_pockets
.iter()
.flat_map(|el| el.env())
.filter(|el| !lava_with_filled_pockets.contains(el))
.count();
println!("for part 2, the surface area is {}", surface_area_part2);
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE1)?;
solve(REAL)?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::collections::HashSet;
use std::hash::{Hash, Hasher};
use std::str::FromStr;
#[derive(Debug, Hash, Eq, PartialEq, Clone, Copy)]
pub struct Point {
pub x: i8,
pub y: i8,
pub z: i8,
}
impl Point {
pub fn add(&self, other: &Self) -> Self {
Self {
x: self.x + other.x,
y: self.y + other.y,
z: self.z + other.z,
}
}
pub fn env(&self) -> Vec<Self> {
vec![
Self {
x: self.x - 1,
y: self.y,
z: self.z,
},
Self {
x: self.x + 1,
y: self.y,
z: self.z,
},
Self {
x: self.x,
y: self.y - 1,
z: self.z,
},
Self {
x: self.x,
y: self.y + 1,
z: self.z,
},
Self {
x: self.x,
y: self.y,
z: self.z - 1,
},
Self {
x: self.x,
y: self.y,
z: self.z + 1,
},
]
}
pub fn as_node(&self) -> Node {
Node {
p: *self,
neighbours: HashSet::<Point>::new(),
}
}
}
impl FromStr for Point {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s.split(",").collect::<Vec<_>>().as_slice() {
[x, y, z] => Ok(Self {
x: x.parse()?,
y: y.parse()?,
z: z.parse()?,
}),
_ => Err(Error::msg("cannot parse pos")),
}
}
}
#[derive(Debug, Clone)]
pub struct Node {
pub p: Point,
pub neighbours: HashSet<Point>,
}
impl Node {
pub fn pos(&self) -> Point {
self.p
}
pub fn neighbours<'a>(&'a self) -> &'a HashSet<Point> {
&self.neighbours
}
// Despite its name, this computes the Manhattan distance, which also works as
// an admissible A* heuristic here.
pub fn infinity_dist(&self, other: &Point) -> usize {
(self.p.x - other.x).abs() as usize
+ (self.p.y - other.y).abs() as usize
+ (self.p.z - other.z).abs() as usize
}
}
// We identify a node only by its position.
impl Hash for Node {
fn hash<H: Hasher>(&self, state: &mut H) {
self.p.hash(state)
}
}
impl PartialEq for Node {
fn eq(&self, other: &Self) -> bool {
self.p == other.p
}
}
impl Eq for Node {}
There are no tests this time.
Please have a look at src/main.rs for the expected names of input files.
Assuming the expected files are present, you only need to execute cargo run to run the solution.
The expected input files are usually called sample.dat and stage_1.dat.
This is my implementation for both rounds of today’s puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language, while the creation of easily readable code is a second priority this time. Writing fast code is a distant third priority.
All source code lives in the src directory.
This solution contains a main.rs, which defines the main executable.
There is also a data.rs, which specifies data types important for this solution as well as associated functions, methods, and traits.
There is also an io.rs, which contains helper functions related to input parsing and output processing.
Today, I first went on a wild goose chase because I had a fancy idea that totally didn’t work out but cost quite a bit of time. My second attempt was thus a straightforward implementation of the instructions with a cache, because state values often repeat. The cache associates each world state with its best geode value. Note that a state includes the remaining simulation time. Thus, if we find a state that’s in the cache, we can avoid computing the same world progression again, because the state uniquely identifies how the world will progress from that point on.
In my input for part 1, there was one blueprint that didn’t work out with a straightforward implementation of the cache. That is, the cache grew so large that my computer ran out of RAM. Thus, I only cached values until some time steps before the end.
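The cache-with-threshold pattern can be sketched in isolation. The following toy recursion (hypothetical names, not the actual solver) memoises results only for states far from the end, which bounds memory at the cost of recomputing the cheap late states:

```rust
use std::collections::HashMap;

// States closer to the end than this are recomputed instead of cached.
const CACHE_THRESHOLD: u64 = 8;

// A stand-in for the world simulation: here the state is just the remaining
// time, but the pattern is identical for richer state structs.
fn search(time: u64, cache: &mut HashMap<u64, u64>) -> u64 {
    if time == 0 {
        return 1;
    }
    // Only consult the cache for states far from the end.
    if time >= CACHE_THRESHOLD {
        if let Some(cached) = cache.get(&time) {
            return *cached;
        }
    }
    let best = search(time - 1, cache) + search(time.saturating_sub(2), cache);
    // Remember the value we found, but only for early states.
    if time >= CACHE_THRESHOLD {
        cache.insert(time, best);
    }
    best
}

fn main() {
    let mut cache = HashMap::new();
    println!("search(30) = {}", search(30, &mut cache));
}
```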
main.rs
use anyhow::{Error, Result};
use std::collections::{HashMap, HashSet};
// Constants.
const LRU_THRESHOLD: data::Size = 8;
fn is_env(var: &str, val: &str, def: &str) -> bool {
std::env::var(var).unwrap_or(def.to_string()) == val
}
fn exhaustive_search(
state: data::State,
bp: &data::Blueprint,
actions: &Vec<data::WhatToBuild>,
lru: &mut HashMap<data::State, data::Size>,
total_best: &mut data::Size,
) -> data::Size {
if state.time == 0 {
return state.geode;
} else if state.time >= LRU_THRESHOLD {
if let Some(lru_val) = lru.get(&state) {
return *lru_val;
}
} else if state.geode + state.time * state.geode_robots + (state.time - 1) * (state.time - 1)
< *total_best
{
// Return early if a very optimistic estimate of what we can still achieve is lower than
// the best we've already found.
return state.geode;
}
let mut best = state.geode;
for act in actions.iter() {
if let Some(next) = state.next(bp, act) {
let possible_best = exhaustive_search(next, bp, actions, lru, total_best);
if possible_best > best {
best = possible_best;
if best > *total_best {
*total_best = best;
}
}
}
}
// Remember the value we found, but only for early ones.
if state.time >= LRU_THRESHOLD {
lru.insert(state, best);
}
return best;
}
fn solve(file: &str, part1: bool) -> Result<()> {
println!("PROCESSING {}", file);
// Read file and convert into data.
let blueprints = io::parse_chunks_to_data::<data::Blueprint>(
io::read_lines_from_file(file, 1)?,
"blueprint",
None,
None,
)?;
let actions = vec![
data::WhatToBuild::GeodeR,
data::WhatToBuild::ObsidianR,
data::WhatToBuild::ClayR,
data::WhatToBuild::OreR,
data::WhatToBuild::Nothing,
];
if part1 {
let mut best_vals = vec![];
for (idx, bp) in blueprints.iter().enumerate() {
let mut lru = HashMap::<data::State, data::Size>::new();
let state = data::State::start(24);
let mut best_cache = 0;
let best = exhaustive_search(state, bp, &actions, &mut lru, &mut best_cache);
println!("best for {} is {}", idx + 1, best);
best_vals.push(best);
}
let quality_level = best_vals
.into_iter()
.zip(blueprints.iter())
.map(|(val, bp)| bp.id as usize * val as usize)
.sum::<usize>();
println!("the overall quality level is: {}", quality_level);
} else {
let mut best_vals = vec![];
for (idx, bp) in blueprints.iter().take(3).enumerate() {
// Sadly, we cannot reuse the LRU cache for other blueprints.
let mut lru = HashMap::<data::State, data::Size>::new();
let mut best_cache = 0;
let state = data::State::start(32);
let best = exhaustive_search(state, bp, &actions, &mut lru, &mut best_cache);
println!("best for {} is {}", idx + 1, best);
best_vals.push(best);
}
let product = best_vals
.into_iter()
.map(|el| el as usize)
.product::<usize>();
println!("the product of the three best values is: {}", product);
}
Ok(())
}
fn main() -> Result<()> {
if is_env("RUN", "0", "0") {
solve(SAMPLE1, true)?;
}
if is_env("RUN", "1", "1") {
solve(REAL, true)?;
}
if is_env("RUN", "2", "2") {
solve(SAMPLE1, false)?;
}
if is_env("RUN", "3", "3") {
solve(REAL, false)?;
}
Ok(())
}
data.rs
use anyhow::{Error, Result};
use rand::distributions::{Distribution, Standard};
use rand::Rng;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};
use std::str::FromStr;
pub type Size = u16;
#[derive(Debug, Clone)]
pub struct Blueprint {
pub id: Size,
pub ore_ore_cost: Size,
pub clay_ore_cost: Size,
pub obsidian_ore_cost: Size,
pub obsidian_clay_cost: Size,
pub geode_ore_cost: Size,
pub geode_obsidian_cost: Size,
}
#[derive(Debug, PartialEq)]
pub enum WhatToBuild {
OreR,
ClayR,
ObsidianR,
GeodeR,
Nothing,
}
impl FromStr for Blueprint {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s.split_whitespace().collect::<Vec<_>>().as_slice() {
["Blueprint", id, "Each", "ore", "robot", "costs", ore_ore_cost, "ore.", "Each", "clay", "robot", "costs", clay_ore_cost, "ore.", "Each", "obsidian", "robot", "costs", obsidian_ore_cost, "ore", "and", obsidian_clay_cost, "clay.", "Each", "geode", "robot", "costs", geode_ore_cost, "ore", "and", geode_obsidian_cost, "obsidian."] => {
Ok(Self {
id: id.trim_end_matches(":").parse()?,
ore_ore_cost: ore_ore_cost.parse()?,
clay_ore_cost: clay_ore_cost.parse()?,
obsidian_ore_cost: obsidian_ore_cost.parse()?,
obsidian_clay_cost: obsidian_clay_cost.parse()?,
geode_ore_cost: geode_ore_cost.parse()?,
geode_obsidian_cost: geode_obsidian_cost.parse()?,
})
}
_ => Err(Error::msg("cannot parse blueprint")),
}
}
}
#[derive(Debug, PartialEq, Eq, Hash)]
pub struct State {
pub time: Size,
pub ore: Size,
pub ore_robots: Size,
pub clay: Size,
pub clay_robots: Size,
pub obsidian: Size,
pub obsidian_robots: Size,
pub geode: Size,
pub geode_robots: Size,
}
impl State {
pub fn start(available_time: Size) -> Self {
Self {
time: available_time,
ore: 0,
ore_robots: 1,
clay: 0,
clay_robots: 0,
obsidian: 0,
obsidian_robots: 0,
geode: 0,
geode_robots: 0,
}
}
pub fn next(&self, bp: &Blueprint, act: &WhatToBuild) -> Option<Self> {
// Only perform the drawn operation if we have enough resources for it. Otherwise,
// implicitly perform a "nothing" operation.
match act {
WhatToBuild::Nothing => {
Some(Self {
time: self.time - 1,
// Materials.
ore: self.ore + self.ore_robots,
clay: self.clay + self.clay_robots,
obsidian: self.obsidian + self.obsidian_robots,
geode: self.geode + self.geode_robots,
// Robots.
ore_robots: self.ore_robots,
clay_robots: self.clay_robots,
obsidian_robots: self.obsidian_robots,
geode_robots: self.geode_robots,
})
}
WhatToBuild::OreR => {
if self.ore < bp.ore_ore_cost {
None
} else {
Some(Self {
time: self.time - 1,
// Materials.
ore: self.ore + self.ore_robots - bp.ore_ore_cost,
clay: self.clay + self.clay_robots,
obsidian: self.obsidian + self.obsidian_robots,
geode: self.geode + self.geode_robots,
// Robots.
ore_robots: self.ore_robots + 1,
clay_robots: self.clay_robots,
obsidian_robots: self.obsidian_robots,
geode_robots: self.geode_robots,
})
}
}
WhatToBuild::ClayR => {
if self.ore < bp.clay_ore_cost {
None
} else {
Some(Self {
time: self.time - 1,
// Materials.
ore: self.ore + self.ore_robots - bp.clay_ore_cost,
clay: self.clay + self.clay_robots,
obsidian: self.obsidian + self.obsidian_robots,
geode: self.geode + self.geode_robots,
// Robots.
ore_robots: self.ore_robots,
clay_robots: self.clay_robots + 1,
obsidian_robots: self.obsidian_robots,
geode_robots: self.geode_robots,
})
}
}
WhatToBuild::ObsidianR => {
if self.ore < bp.obsidian_ore_cost || self.clay < bp.obsidian_clay_cost {
None
} else {
Some(Self {
time: self.time - 1,
// Materials.
ore: self.ore + self.ore_robots - bp.obsidian_ore_cost,
clay: self.clay + self.clay_robots - bp.obsidian_clay_cost,
obsidian: self.obsidian + self.obsidian_robots,
geode: self.geode + self.geode_robots,
// Robots.
ore_robots: self.ore_robots,
clay_robots: self.clay_robots,
obsidian_robots: self.obsidian_robots + 1,
geode_robots: self.geode_robots,
})
}
}
WhatToBuild::GeodeR => {
if self.ore < bp.geode_ore_cost || self.obsidian < bp.geode_obsidian_cost {
None
} else {
Some(Self {
time: self.time - 1,
// Materials.
ore: self.ore + self.ore_robots - bp.geode_ore_cost,
clay: self.clay + self.clay_robots,
obsidian: self.obsidian + self.obsidian_robots - bp.geode_obsidian_cost,
geode: self.geode + self.geode_robots,
// Robots.
ore_robots: self.ore_robots,
clay_robots: self.clay_robots,
obsidian_robots: self.obsidian_robots,
geode_robots: self.geode_robots + 1,
})
}
}
}
}
}
There are no tests this time.
Please have a look at src/main.rs for the expected names of input files.
Assuming the expected files are present, you only need to execute cargo run to run the solution.
The expected input files are usually called sample.dat and stage_1.dat.
This is my implementation for both rounds of today’s puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language, while the creation of easily readable code is a second priority this time. Writing fast code is a distant third priority.
All source code lives in the src directory.
This solution contains a main.rs, which defines the main executable.
There is also a data.rs, which specifies data types important for this solution as well as associated functions, methods, and traits.
There is also an io.rs, which contains helper functions related to input parsing and output processing.
Today was both frustrating and nice.
At first, I came up with a complex approach that used an index map and its
reverse to keep track of where an element was originally and where it is right
now.
But then I realised that Rust’s vectors have insert and remove methods that do just what was needed here.
Luckily, I discarded my first idea pretty quickly.
Furthermore, as I discovered during one of the previous days, an iterator over a vector in Rust can be made infinite via the cycle method.
Thus, for part 1, I didn’t have to care about the cyclic nature of the problem.
In fact, the vector I get in the end doesn’t look like the one online, but it
also doesn’t have to.
It’s just that each element has to have the same neighbours as in the example,
which they do.
What I do is I keep a vector that consists of a tuple of the actual number and its original index. I use that original index to find the number in the vector when it’s its turn to be mixed. That’s basically part 1. At first, I made a stupid mistake when extracting the three numbers that make up the grove coordinates and took the first three in steps of one thousand but including zero. Unfortunately, I tried for quite a bit to apply a fix to the wrong location.
Part 2 was just like part 1 but with some modulo operations applied. You just need to be careful to take the modulus with respect to the length of the file reduced by one. For more details, please see the code. I added extensive comments.
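The index arithmetic behind that modulus can be illustrated on its own. This sketch (hypothetical helper; the solution itself uses cyclic iterators instead, but the modulus is the same) computes where an element lands after it has been removed from the list:

```rust
// After removing the element, only len - 1 slots remain, so shifts repeat
// with period len - 1. rem_euclid keeps the result non-negative even for
// negative shifts.
fn new_index(old_index: usize, shift: i64, len: usize) -> usize {
    let ring = (len as i64) - 1;
    (old_index as i64 + shift).rem_euclid(ring) as usize
}

fn main() {
    // In a list of length 7, moving by 6 is a full cycle: the element ends
    // up right where it was removed.
    assert_eq!(new_index(3, 6, 7), 3);
    // Moving by -2 from index 1 wraps around the reduced ring of 6 slots.
    assert_eq!(new_index(1, -2, 7), 5);
}
```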
main.rs
use anyhow::{Error, Result};
// Constants.
fn solve(file: &str, mixes: usize, decryption_key: data::Size) -> Result<()> {
println!("PROCESSING {}", file);
// Read file and convert into data. We use a custom struct here just so we can continue using
// our parser function.
let nums = io::parse_chunks_to_data::<data::Num>(
io::read_lines_from_file(file, 1)?,
"blueprint",
None,
None,
)?;
// Convert custom struct into vector of primitive types.
let mut file = nums
.iter()
// For part 1, the decryption_key is 1.
.map(|el| el.num() * decryption_key)
// Remember the index in the original list. This is used to find the data point later on
// based on its index in the original input.
.enumerate()
.collect::<Vec<_>>();
// Two convenience values that will be used further down.
// Let's always remember how many values there were originally.
let len = file.len();
// After removing an element from the vector, this is gonna be its length. Thus, this is the
// value with respect to which we need to take the modulus when deciding how many cycles to
// skip.
let mod_me = file.len() - 1;
// We will be mixing `mixes` times.
for _ in 0..mixes {
// We will keep mixing in the original order.
for org_idx in 0..len {
// Find the element that was at `org_idx` in the initial input and remember its value
// and current index.
let move_me_data = file
.iter()
.position(|&el| el.0 == org_idx)
.ok_or(Error::msg("cannot find element"))?;
// Zero doesn't move so we skip it.
if file[move_me_data].1 == 0 {
continue;
}
// Remove the element from the vector. When imagining the cyclic list, it becomes clear
// that a number will never encounter itself. Thus, it is the same as if we were
// moving through an infinite vector that doesn't contain the number.
let move_me = file.remove(move_me_data);
// Below, we use a nice feature of Rust's iterators over vectors, namely that they can
// be cyclic. Thus, we don't actually care whether our resulting vector looks as it
// does in the example because it is cyclic anyway.
if move_me.1 < 0 {
// Move to the left. Here, we iterate through the vector backwards.
let prev_elem_pos = file
.iter()
.enumerate()
// Reverse the iterator's direction here.
.rev()
// Turn it into an infinite iterator. Thus, we won't have to care about
// wrap-arounds.
.cycle()
// Skip as many elements until we are at the location of the element we just
// removed. Doing so when moving backwards is a bit of a hassle but it works.
.skip(file.len() - move_me_data)
// Skip as often as we need to according to the value of the number.
// Make this efficient by taking the modulus with respect to the current length
// of the vector.
.skip((move_me.1.abs() as usize) % mod_me)
// Get the next element in the iterator, which is the element that is currently
// just before the position we want our removed element to occupy.
.next()
.ok_or(Error::msg("this should be infinite"))?
.0;
// Insert after the element we just found. Thanks, Rust, that `insert` also works
// if the position where we want to insert is one past the end.
file.insert((prev_elem_pos + 1).rem_euclid(file.len()), move_me);
} else if move_me.1 > 0 {
// Move to the right. Here, we iterate in forward direction.
let next_elem_pos = file
.iter()
.enumerate()
// Turn it into an infinite iterator. Thus, we won't have to care about
// wrap-arounds.
.cycle()
// Skip as many elements until we are at the location of the element we just
// removed.
.skip(move_me_data)
// Skip as often as we need to according to the value of the number.
// Make this efficient by taking the modulus with respect to the current length
// of the vector.
.skip((move_me.1.abs() as usize) % mod_me)
// Get the next element, which is the element that is currently at the position
// we want our removed element to occupy.
.next()
.ok_or(Error::msg("this should be infinite"))?
.0;
// Insert at the position of the element we just found.
file.insert(next_elem_pos, move_me);
} else {
unreachable!("the zero element was skipped earlier");
}
}
}
// Extract the desired sum in a lazy way. Simply iterate 1, 2 and 3 thousand times over an ever
// repeating instance of our iterator.
//
// Find the index of 0 first. This helps with debugging in case 0 is removed by accident.
let start_idx = file
.iter()
.enumerate()
.find_map(|(idx, (_, num))| if num == &0 { Some(idx) } else { None })
.ok_or(Error::msg("cannot find zero"))?;
// Then take the sum.
let grove_coords = file
.into_iter()
// Take apart so that we only keep the data. We no longer care about original indices.
.map(|(_idx, el)| el)
.collect::<Vec<_>>()
.into_iter()
// Infinite iterators again.
.cycle()
// Skip until we are at the location of 0.
.skip(start_idx)
// Find current indices.
.enumerate()
// Take only every 1000'th element after zero. Note that 0 also passes here.
.filter_map(|(idx, el)| if idx % 1000 == 0 { Some(el) } else { None })
// We take 4 here because the first one fulfilling the condition will be zero.
.take(4)
// Skip zero.
.skip(1)
.sum::<data::Size>();
println!(
"solution for {} mix(es) and a key of {} is {}",
mixes, decryption_key, grove_coords
);
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE1, 1, 1)?;
solve(REAL, 1, 1)?;
solve(SAMPLE1, 10, 811_589_153)?;
solve(REAL, 10, 811_589_153)?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::str::FromStr;
pub type Size = i64;
#[derive(Debug)]
pub struct Num(Size);
impl Num {
pub fn num(&self) -> Size {
self.0
}
}
impl FromStr for Num {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
Ok(Self(s.parse()?))
}
}
There are no tests this time.
Please have a look at src/main.rs for the expected names of input files.
Assuming the expected files are present, you only need to execute cargo run to run the solution.
The expected input files are usually called sample.dat and stage_1.dat.
This is my implementation for both rounds of today’s puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language, while the creation of easily readable code is a second priority this time. Writing fast code is a distant third priority.
All source code lives in the src directory.
This solution contains a main.rs, which defines the main executable.
There is also a data.rs, which specifies data types important for this solution as well as associated functions, methods, and traits.
There is also an io.rs, which contains helper functions related to input parsing and output processing.
No time to explain, barely enough time to implement.
Part 1 is straightforward: simply fill in all monkey values that you can each round until you find the value for root.
Part 2 was more tricky. Looking at the data, it becomes clear that only one side of the equation depends on the human’s value while the other one is constant. My code determines that side automatically.
Furthermore, you need to realise that the value of the other side changes monotonically depending on the human’s value. Then, it’s a simple search over the available parameter space. I opted for a search with more steps than a binary search and it works well enough for my inputs. I also assume that the number we need to shout is positive since all monkeys shout positive values themselves. That might not hold for all inputs.
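The shrinking-step search can be sketched against a simple monotonic function (hypothetical f; the real code compares the two sides of root’s equation instead). It assumes an exact integer solution exists at a non-negative position:

```rust
use std::cmp::Ordering;

// Find x with f(x) == target for a monotonically increasing f: march right
// in large steps, and on overshoot step back once and shrink the step. This
// is coarser than binary search but converges quickly enough.
fn search(f: impl Fn(i64) -> i64, target: i64) -> i64 {
    let mut check = 0;
    let mut step = 1_000_000;
    loop {
        match f(check).cmp(&target) {
            Ordering::Equal => return check,
            Ordering::Less => check += step,
            Ordering::Greater => {
                check -= step;
                step = (step / 10).max(1);
            }
        }
    }
}

fn main() {
    println!("found {}", search(|x| 3 * x + 1, 370_372));
}
```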
My solution might not work for all inputs, but it should be easy to fix. I added some checks that panic if they fail in order to catch edge cases, but those didn’t occur for me.
main.rs
use anyhow::{Error, Result};
use std::cmp::Ordering;
use std::collections::HashMap;
// Constants.
fn do_op(op: char, val1: &isize, val2: &isize) -> (isize, isize) {
match op {
'+' => (val1 + val2, 0),
'-' => (val1 - val2, 0),
'*' => (val1 * val2, 0),
'/' => (val1 / val2, val1 % val2),
_ => panic!("unknown op"),
}
}
fn do_op_str(op: char, str1: &str, str2: &str) -> String {
match op {
'*' => format!("{}*{}", str1, str2),
'+' => format!("({}+{})", str1, str2),
'-' => format!("({}-{})", str1, str2),
'/' => format!("({}/{})", str1, str2),
_ => panic!("unknown op"),
}
}
fn monkeys_again(
monkeys: &Vec<data::Monkey>,
human: isize,
part1: bool,
) -> Result<(isize, isize, char, isize, bool)> {
// A map from monkey names to their numbers for those monkeys that already know them.
let mut known = monkeys
.iter()
.filter_map(|el| {
if let data::Action::Shout(num) = el.action {
Some((el.name.clone(), num))
} else {
None
}
})
.collect::<HashMap<_, _>>();
if !part1 {
known.insert("humn".to_string(), human);
}
// We use this to determine which side of part 2's root equation is independent of the human
// value.
let mut known_str = monkeys
.iter()
.filter_map(|el| {
if let data::Action::Shout(num) = el.action {
Some((el.name.clone(), format!("{}", num)))
} else {
None
}
})
.collect::<HashMap<_, _>>();
known_str.insert("humn".to_string(), "humn".to_string());
// A map from monkey names to their ops for those monkeys that don't yet know their numbers.
let mut unknown = monkeys
.iter()
.filter_map(|el| {
if let data::Action::Op(mon1, mon2, op) = &el.action {
if !known.contains_key(&el.name) {
Some((el.name.clone(), (mon1, mon2, op)))
} else {
None
}
} else {
None
}
})
.collect::<HashMap<_, _>>();
let root_name = "root".to_string();
let mut largest_remainder = 0;
while !unknown.is_empty() {
let mut moved = vec![];
for (name, (mon1, mon2, &op)) in unknown.iter() {
if let Some(val1) = known.get(mon1.as_str()) {
if let Some(val2) = known.get(mon2.as_str()) {
if name == &root_name {
// We've reached the end.
if !part1 {
return Ok((*val1, *val2, op, largest_remainder, false));
} else {
let str1 = known_str.get(mon1.as_str()).expect("mon1 not found");
let str2 = known_str.get(mon2.as_str()).expect("mon2 not found");
let left_has_human = if !str1.contains("humn") && str2.contains("humn")
{
false
} else if str1.contains("humn") && !str2.contains("humn") {
true
} else {
return Err(Error::msg("neither side depends on the human"));
};
return Ok((*val1, *val2, op, largest_remainder, left_has_human));
}
} else {
let result = do_op(op, val1, val2);
if result.1.abs() > largest_remainder {
largest_remainder = result.1.abs();
}
known.insert(name.clone(), result.0);
// Handle string representation.
if part1 {
let str1 = known_str.get(mon1.as_str()).expect("mon1 not found");
let str2 = known_str.get(mon2.as_str()).expect("mon2 not found");
known_str.insert(name.clone(), do_op_str(op, str1, str2));
}
// We will remove the ones in this vector from the map of unknown values.
moved.push(name.clone());
}
}
}
}
for now_known in moved {
unknown.remove(&now_known);
}
}
Err(Error::msg("we didn't find the root monkey"))
}
fn solve(file: &str) -> Result<()> {
println!("PROCESSING {}", file);
// Read file and convert into data. We use a custom struct here just so we can continue using
// our parser function.
let monkeys = io::parse_chunks_to_data::<data::Monkey>(
io::read_lines_from_file(file, 1)?,
"monkey",
None,
None,
)?;
let (left_root_val, right_root_val, root_op, largest_remainder, left_has_human) =
monkeys_again(&monkeys, 0, true)?;
assert_eq!(largest_remainder, 0, "remainder is not zero");
assert!(
left_has_human,
"right value is constant, swap left and right values if it isn't"
);
let (root_val, _) = do_op(root_op, &left_root_val, &right_root_val);
println!("root's value is {}", root_val);
let start = 0;
let end = isize::MAX;
let mut step = 1_000_000_000_000;
// This is hacky but it works. We assume a certain ordering for the left and right values at
// zero. If the ordering is not what we expect, we simply multiply both sides by -1, which
// "fixes" the ordering.
let (zero_left_root_val, zero_right_root_val, _, _, _) = monkeys_again(&monkeys, 0, false)?;
let inv = if zero_left_root_val > zero_right_root_val {
1
} else {
-1
};
// Part 2.
// We assume the number we need to shout is positive.
let mut check = start;
let mut found = false;
while !found && end - step > check {
let (left_root_val, right_root_val, _, largest_remainder, _) =
monkeys_again(&monkeys, check, false)?;
// println!("{} == {}", left_root_val, right_root_val);
match (inv * left_root_val).cmp(&(inv * right_root_val)) {
Ordering::Equal => {
if largest_remainder == 0 {
found = true;
println!("we need to shout {}\n", check);
} else {
panic!("we found a non-zero remainder");
}
}
Ordering::Less => {
check -= step;
step /= 10;
if step == 0 {
step = 1;
}
}
Ordering::Greater => {
check += step;
}
}
}
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE1)?;
solve(REAL)?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::collections::HashMap;
use std::str::FromStr;
#[derive(Debug)]
pub struct Monkey {
pub name: String,
pub action: Action,
}
#[derive(Debug)]
pub enum Action {
Shout(isize),
Op(String, String, char),
}
impl FromStr for Monkey {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s.split_whitespace().collect::<Vec<_>>().as_slice() {
[name, num] => Ok(Self {
name: name.trim_end_matches(":").to_string(),
action: Action::Shout(num.parse()?),
}),
[name, mon1, op, mon2] => {
if op.len() == 1 {
Ok(Self {
name: name.trim_end_matches(":").to_string(),
action: Action::Op(
mon1.to_string(),
mon2.to_string(),
op.chars().next().ok_or(Error::msg("empty op detected"))?,
),
})
} else {
Err(Error::msg("multi-char op detected"))
}
}
_ => Err(Error::msg("cannot parse monkey")),
}
}
}
There are no tests this time.
Please have a look at src/main.rs for the expected names of input files.
Assuming the expected files are present, you only need to execute cargo run to run the solution.
The expected input files are usually called sample.dat and stage_1.dat.
This is my implementation for both rounds of today’s puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language, while the creation of easily readable code is a second priority this time. Writing fast code is a distant third priority.
All source code lives in the src directory.
This solution contains a main.rs, which defines the main executable.
There is also a data.rs, which specifies data types important for this solution as well as associated functions, methods, and traits.
There is also an io.rs, which contains helper functions related to input parsing and output processing.
This one was arguably the hardest one of this AOC. For part 1, I solved the problem by building a list of neighbours for each tile that is either free or a wall. You then simply follow the instruction string given. The most important function for building the neighbour map for part 1 searches the map from an edge along a line to find the first tile that is free or a wall.
Part 2 could have been solved by hard-coding the neighbour relations between edges, but I didn’t want to do that, which is one of the reasons I finished this one last and rather late. Two main ideas come into play here:
Regarding rotation: Assuming you have a correct neighbour map, you can then easily determine the new rotation after a move by looking at the previous location. If that location is "up" from your current one, then you are now facing down, irrespective of any previous heading (holds similarly for the other directions).
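That first idea can be sketched as a pure function (hypothetical Facing type and offset convention, with y growing downwards): the new facing depends only on where the previous tile sits relative to the current one:

```rust
#[derive(Debug, PartialEq)]
enum Facing {
    Up,
    Down,
    Left,
    Right,
}

// `from_offset` is the previous tile's position minus the current one, as
// (dx, dy) with y growing downwards. Coming from the tile above means we now
// face down, regardless of the heading before the move.
fn facing_after_move(from_offset: (i32, i32)) -> Facing {
    match from_offset {
        (0, -1) => Facing::Down,
        (0, 1) => Facing::Up,
        (-1, 0) => Facing::Right,
        (1, 0) => Facing::Left,
        _ => panic!("tiles are not adjacent"),
    }
}

fn main() {
    assert_eq!(facing_after_move((0, -1)), Facing::Down);
    assert_eq!(facing_after_move((1, 0)), Facing::Left);
}
```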
Regarding the neighbour map: I was wondering how to find out which points are neighbours to which ones. As a human, I would fold up the map and construct a cube, which is what I did here, too. Because the map does not contain any cuts between neighbouring 50x50 patches (aka cube faces in the flat map), we can make do with a simple folding procedure.
Identify points belonging to each cube face and convert to 3D. That conversion simply sets the z-coordinate to zero for now. Also remember an "up" or "north" vector and a normal vector for each point.
Identify which cube faces neighbour which other faces.
For each face, determine which faces need to be folded when folding down the face to the right. This can be determined by a breadth-first search over all face neighbours that is blocked by the reference face. Do the same for faces that lie downwards.
Construct a transformation consisting of rotation and translation that describes the folding operation.
Fold all points in affected faces.
Rinse and repeat.
Build the neighbour map, using the fact that a folded neighbour’s normal vector is identical to the difference vector that would point to that neighbour if it were still in the same plane.
Once you have a neighbour map, you apply the very same algorithm as in part 1, with the addition of the aforementioned rotation fix. Also note that folded neighbours are identified by the very same location but with different normal vectors. Thus, the actual tile position can be found by moving a tile in the direction of its normal vector by half a unit.
The code looks horrible because I wanted to get it done and didn’t care about readability. Thus, I haven’t included it here. Please feel free to check out the repo if you want to have a look.
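Although the folding code itself is not shown here, its core operation can be sketched. This is a 90-degree rotation of integer points about an axis parallel to the x-axis (hypothetical names; the sign of the rotation depends on the fold direction):

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
struct P3 {
    x: i64,
    y: i64,
    z: i64,
}

// Rotate a point by 90 degrees about the line parallel to the x-axis through
// (0, pivot_y, pivot_z): translate the pivot to the origin, apply the planar
// rotation (y, z) -> (-z, y), and translate back. Composing such rotations
// with translations is all the folding procedure needs.
fn fold_about_x_axis(p: P3, pivot_y: i64, pivot_z: i64) -> P3 {
    let (y, z) = (p.y - pivot_y, p.z - pivot_z);
    P3 {
        x: p.x,
        y: -z + pivot_y,
        z: y + pivot_z,
    }
}

fn main() {
    // A point in the flat map (z == 0) folds upwards out of the plane.
    let folded = fold_about_x_axis(P3 { x: 1, y: 2, z: 0 }, 0, 0);
    assert_eq!(folded, P3 { x: 1, y: 0, z: 2 });
}
```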
Please have a look at src/main.rs for the expected names of input files.
Assuming the expected files are present, you only need to execute cargo run to run the solution.
The expected input files are usually called sample.dat and stage_1.dat.
This is my implementation for both rounds of today’s puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language, while the creation of easily readable code is a second priority this time. Writing fast code is a distant third priority.
All source code lives in the src directory.
This solution contains a main.rs, which defines the main executable.
There is also a data.rs, which specifies data types important for this solution as well as associated functions, methods, and traits.
There is also an io.rs, which contains helper functions related to input parsing and output processing.
This feels like a variant of the famous game of life. The solution can easily be found by diligently following the instructions given. One potential pitfall could be that the elves change the directions in which they prefer to move. I used an infinite iterator over a vector again and made sure to skip one element after each round. That’s basically it.
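The rotating proposal order can be sketched in isolation like this (direction names here are illustrative; the actual solution below cycles Point constants):

```rust
use std::iter::Cycle;
use std::vec::IntoIter;

// Collect the four proposal directions one round would consider, in order.
fn round_order(proposals: &mut Cycle<IntoIter<char>>) -> Vec<char> {
    (0..4).map(|_| proposals.next().unwrap()).collect()
}

fn main() {
    // Infinite, repeating stream of proposal directions.
    let mut proposals = vec!['N', 'S', 'W', 'E'].into_iter().cycle();
    for round in 0..3 {
        println!("round {}: {:?}", round, round_order(&mut proposals));
        // Discard one element so the next round's order rotates by one.
        proposals.next();
    }
}
```

Since taking four elements from a four-element cycle lands back at the start, the extra discarded element is exactly what rotates the preference order each round.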
main.rs
use anyhow::{Context, Error, Result};
use std::cmp::Ordering;
use std::collections::{HashMap, HashSet};
use std::iter::Cycle;
use std::vec::IntoIter;
// Constants.
// Play one round.
fn game_of_elves(
elves: &mut HashSet<data::Point>,
proposals: &mut Cycle<IntoIter<data::Point>>,
) -> bool {
// First half. Find out which elves might actually move by filtering out those that have no
// other elf anywhere around them.
let maybe_moving_elves = elves
.iter()
.filter(|el| el.env().into_iter().any(|env| elves.contains(&env)))
.cloned()
.collect::<HashSet<data::Point>>();
let mut props = HashMap::<data::Point, data::Point>::new();
// Each one of those elves proposes a spot. We propose up to 4 directions.
for _ in 0..4 {
let prop = proposals.next().expect("infinite proposals");
props.extend(
maybe_moving_elves
.iter()
// Only check those that haven't yet proposed something.
.filter(|el| !props.contains_key(el))
// Keep only those for whom this direction is clear.
.filter(|el| {
el.dir_env(&prop)
.into_iter()
.all(|env| !elves.contains(&env))
})
.map(|el| (el.clone(), el.add(&prop)))
.collect::<HashMap<_, _>>(),
);
}
let mut prop_count = HashMap::<data::Point, usize>::new();
for prop in props.values() {
*prop_count.entry(prop.clone()).or_insert(0) += 1;
}
let mut was_moved = false;
// Second half. Only update to those positions that have been suggested exactly once.
for (elf, prop) in props {
if prop_count.get(&prop).unwrap() == &1 {
elves.remove(&elf);
elves.insert(prop);
was_moved = true;
}
}
was_moved
}
fn solve(file: &str) -> Result<()> {
println!("PROCESSING {}", file);
// Read file and convert into data.
// Also obtain max coords. Min coords are implicitly 0.
let (occ_map, _, _) = io::parse_chars_to_data::<data::Input>(file, "input", None, None)?;
let mut elves = occ_map
.into_iter()
.filter_map(|(pos, tile)| {
if tile == data::Input::Elf {
Some(pos)
} else {
None
}
})
.collect::<HashSet<_>>();
// This is our infinite stream of proposals.
let mut proposals = vec![data::N, data::S, data::W, data::E].into_iter().cycle();
// Consider 10 rounds for part 1.
for _ in 0..10 {
game_of_elves(&mut elves, &mut proposals);
// Discard one element.
proposals.next().expect("inf it");
}
let min_x = elves.iter().map(|el| el.x).min().expect("no min x");
let max_x = elves.iter().map(|el| el.x).max().expect("no max x");
let min_y = elves.iter().map(|el| el.y).min().expect("no min y");
let max_y = elves.iter().map(|el| el.y).max().expect("no max y");
let mut count = 0;
for x in min_x..=max_x {
for y in min_y..=max_y {
if !elves.contains(&data::Point::new(x, y)) {
count += 1;
}
}
}
println!("free field count is {}", count);
let mut round_count = 10;
while game_of_elves(&mut elves, &mut proposals) {
proposals.next().expect("inf it");
round_count += 1;
}
println!("final round count is {}", round_count + 1);
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE1)?;
solve(REAL)?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::str::FromStr;
#[derive(Debug, Hash, PartialEq, Eq, Clone)]
pub struct Point {
pub x: isize,
pub y: isize,
}
#[derive(Debug, PartialEq)]
pub enum Input {
Elf,
Free,
}
pub const N: Point = Point { x: 0, y: -1 };
pub const NE: Point = Point { x: 1, y: -1 };
pub const E: Point = Point { x: 1, y: 0 };
pub const SE: Point = Point { x: 1, y: 1 };
pub const S: Point = Point { x: 0, y: 1 };
pub const SW: Point = Point { x: -1, y: 1 };
pub const W: Point = Point { x: -1, y: 0 };
pub const NW: Point = Point { x: -1, y: -1 };
impl Point {
pub fn new(x: isize, y: isize) -> Self {
Self { x, y }
}
pub fn add(&self, other: &Self) -> Self {
Self {
x: self.x + other.x,
y: self.y + other.y,
}
}
pub fn env(&self) -> Vec<Self> {
vec![
self.add(&N),
self.add(&NE),
self.add(&E),
self.add(&SE),
self.add(&S),
self.add(&SW),
self.add(&W),
self.add(&NW),
]
}
pub fn dir_env(&self, dir: &Point) -> Vec<Self> {
match dir {
&N => vec![self.add(&NW), self.add(&N), self.add(&NE)],
&E => vec![self.add(&NE), self.add(&E), self.add(&SE)],
&S => vec![self.add(&SE), self.add(&S), self.add(&SW)],
&W => vec![self.add(&SW), self.add(&W), self.add(&NW)],
_ => panic!("we should never propose another direction"),
}
}
}
impl FromStr for Input {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s {
"." => Ok(Self::Free),
"#" => Ok(Self::Elf),
_ => Err(Error::msg("cannot parse as tile")),
}
}
}
There are no tests this time.
Please have a look at src/main.rs for expected names of input files. Assuming the expected files are present, you only need to execute cargo run to run the solution. The expected input files are usually called sample.dat and stage_1.dat.
This is my implementation for both rounds of today’s puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language, while the creation of easily readable code is a second priority, this time. Writing fast code is a far-away third priority.
All source code lives in the src directory. This solution contains a main.rs, which defines the main executable. There is also a data.rs, which specifies data types important for this solution as well as associated functions, methods, and traits. There is also an io.rs, which contains helper functions related to input parsing and output processing.
The code for this one looks horrible because I copied and modified an A* algorithm created for a previous day. I basically used A* for finding the path, but in 3 dimensions, with 2 spatial ones and the third one being time. Time can only increase, and the list of viable next locations is generated on the fly based on future blizzard positions.
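Because blizzards move with constant speed on a torus-like interior, their position at any time follows from modular arithmetic, which is what makes the on-the-fly generation cheap. A hedged sketch for the horizontal case (field names are illustrative, not the repo's exact API):

```rust
// Position of a horizontally moving blizzard at time `t`, in interior
// coordinates [0, width). Positions wrap around modulo the interior width.
fn blizzard_x_at(start_x: u16, t: u16, width: u16, rightwards: bool) -> u16 {
    if rightwards {
        (start_x + t % width) % width
    } else {
        // Add `width` before subtracting to avoid unsigned underflow.
        (start_x + width - t % width) % width
    }
}

fn main() {
    // A right-moving blizzard starting at x = 3 in a 5-wide interior.
    println!("{}", blizzard_x_at(3, 4, 5, true)); // 2
    // A left-moving one wraps around the other way.
    println!("{}", blizzard_x_at(0, 1, 5, false)); // 4
}
```

The vertical case is identical with height instead of width; checking a candidate step then amounts to asking whether any blizzard occupies that cell at time t + 1.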
main.rs
use anyhow::{Context, Error, Result};
use std::cmp::Ordering;
use std::collections::{HashMap, HashSet};
// Constants.
fn find_path<'a>(
start: &'a data::Node,
end: &'a data::Node,
graph: &'a HashSet<data::Node>,
blizz: &'a Vec<data::Blizzard>,
estimator_fn: fn(&data::Node, &data::Point) -> usize,
start_time: u16,
) -> Result<HashMap<data::Node, (Option<data::Point>, usize)>> {
let ref_point = end.pos();
let estimator = move |node: &data::Node| estimator_fn(node, &ref_point);
let get_node = move |node: &data::Point| {
graph
.get(&node.as_node(Some(0)))
.ok_or(Error::msg("node not found"))
.map(|el| el.shift(node.t))
};
let mut connections = HashMap::<data::Node, (Option<data::Point>, usize)>::new();
let mut checkable = HashMap::<data::Node, (data::Point, usize)>::new();
let start_at_time = start.shift(start_time);
// Add starting point to resulting path.
connections.insert(start_at_time.clone(), (None, 0));
// Add neighbours of starting point to list of checkable values. Ignore neighbouring points
// that are not part of the graph.
for neigh in start_at_time.neighbours(blizz, graph) {
let neigh_node = get_node(&neigh)?;
// Estimated costs are the most direct possible connection plus 1, since every step costs
// one.
let estimate = estimator(&neigh_node);
checkable.insert(neigh_node, (start_at_time.pos(), estimate));
// connections.insert(neigh_node, (Some(start_at_time.pos()), 1));
}
// Search until we added the final node to the path or until there is nothing more to check.
while !connections.contains_key(end) && !checkable.is_empty() {
// Get node with minimum _estimated_ cost.
let next_best_node = checkable
.iter_mut()
// Get node with minimum estimated cost.
.min_by(|(_node1, (_pre1, cost1)), (_node2, (_pre2, cost2))| cost1.cmp(&cost2))
.ok_or(Error::msg("cannot find next node"))?
.0
.clone();
let (_, (predecessor, _old_estimate)) = checkable
.remove_entry(&next_best_node)
.ok_or(Error::msg("cannot find predecessor"))?;
let cost_of_predecessor = connections
.get(&predecessor.as_node(None))
.ok_or(Error::msg("predecessor has not been visited"))?
.1;
// Add point to resulting path unless we've found the end.
connections.insert(
next_best_node.clone(),
(Some(predecessor), cost_of_predecessor + 1),
);
// We've found the end. Add it in a hacky way.
if estimator(&next_best_node) == 0 {
connections.insert(
next_best_node.shift(end.p.t),
(Some(predecessor), cost_of_predecessor + 1),
);
}
// Add neighbours of point to list of checkable values.
for neigh in next_best_node.neighbours(blizz, graph) {
let neigh_node = get_node(&neigh)?;
if !connections.contains_key(&neigh_node) {
let estimate = cost_of_predecessor + estimator(&neigh_node);
let previous_best = checkable
.get(&neigh_node)
.unwrap_or(&(neigh, std::usize::MAX))
.1;
if previous_best > estimate {
checkable.insert(neigh_node, (next_best_node.pos(), estimate));
}
// connections.insert(neigh_node, Some(start.pos()));
}
}
}
Ok(connections)
}
fn render<'a>(
time: u16,
graph: &'a HashSet<data::Node>,
blizzards: &'a Vec<data::Blizzard>,
min: &'a data::Point,
max: &'a data::Point,
) {
let blizz = blizzards
.iter()
// This is a hacky way of moving the blizzard to a location but still retrieving its
// location in a way that can easily be queried against the graph.
.map(|el| el.at_time(time).as_node(Some(0)).p)
.collect::<Vec<_>>();
let mut blizz_count = HashMap::<data::Point, u8>::with_capacity(blizz.len());
let mut blizz_last = HashMap::<data::Point, &data::Blizzard>::with_capacity(blizz.len());
for (p, b) in blizz.iter().zip(blizzards.iter()) {
let curr_count = blizz_count.get(p).unwrap_or(&0);
blizz_count.insert(p.clone(), curr_count + 1);
blizz_last.insert(p.clone(), b);
}
for y in min.y..=max.y {
for x in min.x..=max.x {
let p = data::Point::new(x as isize, y as isize, 0);
if let Some(_) = graph.get(&p.as_node(None)) {
// This is a potential free spot. Check whether it's a blizzard or not.
if let Some(c) = blizz_count.get(&p) {
if c == &1 {
// Only one blizzard, print its direction.
print!("{}", blizz_last.get(&p).unwrap().as_char());
} else {
// Otherwise, print the count.
print!("{}", c);
}
} else {
// This is a free spot.
print!(".");
}
} else {
// This is a wall spot.
print!("#");
}
}
println!("");
}
// Block until the user presses return.
std::io::stdin().lines().next();
}
fn solve(file: &str) -> Result<()> {
println!("PROCESSING {}", file);
// Read file and convert into data.
// Also obtain max coords. Min coords are implicitly (1,1) for free spaces.
let (occ_map, max_x, max_y) = io::parse_chars_to_data::<data::Tile>(file, "tile", None, None)?;
let max = data::Point::new(max_x, max_y, 0);
let min = data::Point::new(0, 0, 0);
let min_free = data::Point::new(1, 1, 0);
let start = occ_map
.iter()
.find(|(point, tile)| point.y == 0 && tile == &&data::Tile::Free)
.ok_or(Error::msg("cannot find start"))?
.0
.clone()
.as_node(None);
let end = occ_map
.iter()
.find(|(point, tile)| point.y == max.y && tile == &&data::Tile::Free)
.ok_or(Error::msg("cannot find end"))?
.0
.clone()
.as_node(None);
// This isn't really a graph but we call it that because that's the name used in the other days
// where the same path finding algorithm has been used.
let graph = occ_map
.iter()
.filter_map(|(point, tile)| {
if tile != &data::Tile::Wall {
Some(point.as_node(None))
} else {
None
}
})
.collect::<HashSet<_>>();
let blizzards = occ_map
.iter()
.filter_map(|(point, tile)| {
if let data::Tile::Blizzard(dir) = tile {
Some(data::Blizzard::new(
point.clone(),
dir.clone(),
min_free,
max,
))
} else {
None
}
})
.collect::<Vec<_>>();
// for time in 0..10 {
// render(time, &graph, &blizzards, &min, &max);
// }
let estimator = |node: &data::Node, point: &data::Point| node.infinity_dist(&point);
let path = find_path(&start, &end, &graph, &blizzards, estimator, 0)?;
let mut max_time = path
.iter()
.map(|el| el.0.p.t)
.max()
.ok_or(Error::msg("cannot determine path length"))?;
println!("reached goal in {} steps", max_time);
// Part 2.
// Go back to the start.
let path = find_path(&end, &start, &graph, &blizzards, estimator, max_time)?;
max_time = path
.iter()
.map(|el| el.0.p.t)
.max()
.ok_or(Error::msg("cannot determine path length"))?;
println!("reached start again in {} steps", max_time);
// Go back to the end.
let path = find_path(&start, &end, &graph, &blizzards, estimator, max_time)?;
max_time = path
.iter()
.map(|el| el.0.p.t)
.max()
.ok_or(Error::msg("cannot determine path length"))?;
println!("reached goal again in {} steps", max_time);
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE1)?;
solve(REAL)?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::collections::HashSet;
use std::hash::{Hash, Hasher};
use std::str::FromStr;
#[derive(Debug, Hash, PartialEq, Eq, Clone, Copy)]
pub struct Point {
pub x: u8,
pub y: u8,
pub t: u16,
}
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Direction {
Up,
Down,
Left,
Right,
}
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Tile {
Free,
Wall,
Blizzard(Direction),
}
pub struct Blizzard {
start: Point,
direction: Direction,
min_x: u8,
min_y: u8,
max_y: u16,
max_x: u16,
}
impl Blizzard {
pub fn new(start: Point, direction: Direction, min: Point, max: Point) -> Self {
Self {
start: Point {
x: start.x - min.x,
y: start.y - min.y,
t: 0,
},
direction,
min_x: min.x,
min_y: min.y,
max_x: (max.x - min.x) as u16,
max_y: (max.y - min.y) as u16,
}
}
pub fn at_time(&self, t: u16) -> Point {
match self.direction {
Direction::Up => Point {
x: self.start.x + self.min_x,
y: ((self.start.y as u16 + self.max_y - t % self.max_y) % self.max_y) as u8
+ self.min_y,
t,
},
Direction::Down => Point {
x: self.start.x + self.min_x,
y: ((self.start.y as u16 + t % self.max_y) % self.max_y) as u8 + self.min_y,
t,
},
Direction::Left => Point {
x: ((self.start.x as u16 + self.max_x - t % self.max_x) % self.max_x) as u8
+ self.min_x,
y: self.start.y + self.min_y,
t,
},
Direction::Right => Point {
x: ((self.start.x as u16 + t % self.max_x) % self.max_x) as u8 + self.min_x,
y: self.start.y + self.min_y,
t,
},
}
}
pub fn as_char(&self) -> char {
match self.direction {
Direction::Up => '^',
Direction::Down => 'v',
Direction::Left => '<',
Direction::Right => '>',
}
}
}
impl Point {
pub fn new(x: isize, y: isize, t: isize) -> Self {
Self {
x: x as u8,
y: y as u8,
t: t as u16,
}
}
pub fn add(&self, other: &Self) -> Self {
Self {
x: self.x + other.x,
y: self.y + other.y,
t: self.t,
}
}
pub fn env(&self) -> Vec<Self> {
let mut result = Vec::<_>::with_capacity(5);
result.push(
// Waiting.
Self {
x: self.x,
y: self.y,
t: self.t + 1,
},
);
if self.x > 0 {
result.push(
// Moving left.
Self {
x: self.x - 1,
y: self.y,
t: self.t + 1,
},
);
}
result.push(
// Moving right.
Self {
x: self.x + 1,
y: self.y,
t: self.t + 1,
},
);
if self.y > 0 {
result.push(
// Moving up.
Self {
x: self.x,
y: self.y - 1,
t: self.t + 1,
},
);
}
result.push(
// Moving down.
Self {
x: self.x,
y: self.y + 1,
t: self.t + 1,
},
);
result
}
pub fn as_node(&self, time: Option<u16>) -> Node {
let mut result = Node { p: *self };
if let Some(t) = time {
result.p.t = t
}
result
}
}
#[derive(Debug, Clone)]
pub struct Node {
pub p: Point,
}
impl FromStr for Direction {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s {
"^" => Ok(Direction::Up),
"v" => Ok(Direction::Down),
"<" => Ok(Direction::Left),
">" => Ok(Direction::Right),
_ => Err(Error::msg("cannot parse as tile")),
}
}
}
impl FromStr for Tile {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s {
"." => Ok(Self::Free),
"#" => Ok(Self::Wall),
_ => Ok(Self::Blizzard(s.parse()?)),
}
}
}
impl Node {
pub fn pos(&self) -> Point {
self.p
}
pub fn neighbours<'a>(
&'a self,
blizz: &Vec<Blizzard>,
spaces: &HashSet<Node>,
) -> HashSet<Point> {
let occupied = blizz
.iter()
.map(|el| el.at_time(self.p.t + 1))
.collect::<HashSet<_>>();
// Neighbours are points in the environment that are not occupied in the next turn.
self.p
.env()
.into_iter()
.filter(|el| !occupied.contains(el) && spaces.contains(&el.as_node(Some(0))))
.collect::<HashSet<_>>()
}
pub fn infinity_dist(&self, other: &Point) -> usize {
let x_diff = if other.x > self.p.x {
other.x - self.p.x
} else {
self.p.x - other.x
} as usize;
let y_diff = if other.y > self.p.y {
other.y - self.p.y
} else {
self.p.y - other.y
} as usize;
x_diff + y_diff
}
pub fn shift(&self, time: u16) -> Node {
Self {
p: Point {
x: self.p.x,
y: self.p.y,
t: time,
},
}
}
}
// We identify a node only by its position.
impl Hash for Node {
fn hash<H: Hasher>(&self, state: &mut H) {
self.p.hash(state)
}
}
impl PartialEq for Node {
fn eq(&self, other: &Self) -> bool {
self.p == other.p
}
}
impl Eq for Node {}
There are no tests this time.
Please have a look at src/main.rs for expected names of input files. Assuming the expected files are present, you only need to execute cargo run to run the solution. The expected input files are usually called sample.dat and stage_1.dat.
This is my implementation for both rounds of today’s puzzle. This year, I am using Rust to solve the challenges. My first priority is learning more about the language, while the creation of easily readable code is a second priority, this time. Writing fast code is a far-away third priority.
All source code lives in the src directory. This solution contains a main.rs, which defines the main executable. There is also a data.rs, which specifies data types important for this solution as well as associated functions, methods, and traits. There is also an io.rs, which contains helper functions related to input parsing and output processing.
This was nice in Rust: I implemented two traits, FromStr for parsing a SNAFU into an integer and Display for converting it back to a string. Parsing is straightforward. The basic idea for displaying was to first convert to ordinary base 5 numbers and then process the digits from least significant to most significant. Then, for example, a 3 at one position can be replaced by a = at the same position, followed by incrementing the next digit by one. The digits 4 and 5 can be handled similarly.
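The replacement scheme just described can be sketched as a self-contained function. This is a hedged, simplified variant of the repo's to_str below, not its exact code:

```rust
// Convert a non-negative integer to its snafu representation: first to
// ordinary base 5, then replace the digits 3, 4, and 5 (a 5 can appear
// through a carry) from least to most significant, carrying 1 rightward.
fn to_snafu(n: u64) -> String {
    if n == 0 {
        return String::from("0");
    }
    // Base-5 digits, least significant first.
    let mut digits: Vec<i64> = Vec::new();
    let mut num = n as i64;
    while num > 0 {
        digits.push(num % 5);
        num /= 5;
    }
    // 3 -> = (i.e. -2), 4 -> - (i.e. -1), 5 -> 0, each with a carry of +1.
    let mut idx = 0;
    while idx < digits.len() {
        if digits[idx] >= 3 {
            digits[idx] -= 5;
            if idx + 1 == digits.len() {
                digits.push(0);
            }
            digits[idx + 1] += 1;
        }
        idx += 1;
    }
    digits
        .iter()
        .rev()
        .map(|&d| match d {
            0 => '0',
            1 => '1',
            2 => '2',
            -1 => '-',
            -2 => '=',
            _ => unreachable!(),
        })
        .collect()
}

fn main() {
    // 2022 in decimal is 1=11-2 in snafu.
    println!("{}", to_snafu(2022)); // 1=11-2
}
```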
main.rs
use anyhow::Result;
fn solve(file: &str) -> Result<()> {
println!("PROCESSING {}", file);
// Read file and convert into data.
let snafus = io::parse_chunks_to_data::<data::Snafu>(
io::read_lines_from_file(file, 1)?,
"snafu",
None,
None,
)?;
for snafu in &snafus {
println!("{} -> {}", snafu.dec(), snafu);
}
let sum = snafus
.into_iter()
.fold(data::Snafu::new(0), |acc, val| acc.add(&val));
println!("sum is {} -> {}", sum.dec(), sum);
Ok(())
}
fn main() -> Result<()> {
solve(SAMPLE1)?;
solve(REAL)?;
Ok(())
}
data.rs
use anyhow::{Error, Result};
use std::fmt;
use std::str::FromStr;
#[derive(Debug, Hash, PartialEq, Eq, Clone)]
pub struct Snafu(isize);
impl FromStr for Snafu {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
let mut num = 0;
let mut place = 1;
for c in s.trim().chars().rev() {
num += place
* match c {
'-' => -1,
'=' => -2,
'0' | '1' | '2' => String::from(c).parse::<isize>()?,
_ => return Err(Error::msg("unknown char in snafu conversion")),
};
place *= 5;
}
Ok(Self(num))
}
}
impl Snafu {
pub fn dec(&self) -> isize {
self.0
}
pub fn new(val: isize) -> Self {
Self(val)
}
pub fn add(&self, other: &Self) -> Self {
Self(self.0 + other.0)
}
fn to_str(&self) -> Result<String> {
let mut digits = vec![];
// Convert to actual base 5 first.
let mut num = self.0;
while num > 0 {
digits.push(num % 5);
num /= 5;
}
// Now make the conversion to snafu by replacing the digits 3, 4, and 5 (a 5
// can appear via a carry) with their snafu equivalents, carrying 1 onward.
for idx in 0..digits.len() {
match digits[idx] {
dig @ 3 | dig @ 4 | dig @ 5 => {
digits[idx] = dig - 5;
if idx + 1 == digits.len() {
digits.push(0);
}
digits[idx + 1] += 1;
}
// Don't do anything.
_ => {}
}
}
let mut has_err = false;
// Now create the printable representation.
let str_rep = digits
.into_iter()
.rev()
.map(|el| match el {
0 | 1 | 2 => el.to_string(),
-1 => String::from("-"),
-2 => String::from("="),
_ => {
has_err = true;
String::new()
}
})
.collect::<Vec<_>>()
.join("");
if has_err {
Err(Error::msg("cannot print snafu"))
} else {
Ok(str_rep)
}
}
}
impl fmt::Display for Snafu {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
if let Ok(val) = self.to_str() {
write!(f, "{}", val)
} else {
Err(fmt::Error)
}
}
}
There are no tests this time.
Please have a look at src/main.rs for expected names of input files. Assuming the expected files are present, you only need to execute cargo run to run the solution. The expected input files are usually called sample.dat and stage_1.dat.