Re: p2
Definitely a bit tedious, I had to "play" a whole session to spot the bugs I had. It took me far longer than average. I had boxes buggily disappearing because of update order; I would recommend a basic test case of pushing a line/pyramid of boxes in every direction.
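Something along these lines (a made-up sketch in the usual map-then-blank-line-then-moves input format, not an official example); after the pushes, all three boxes should still be on the grid, stacked against the top wall:

########
#......#
#..O...#
#..O...#
#..O...#
#..@...#
#......#
########

^^^

Rotate it for the other three directions; the pyramid variant (one wide box resting on two others after part 2's widening) only shows up once a sideways push has mis-aligned the doubled boxes, which is exactly where update-order bugs eat boxes.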
Updated Reasoning
OK, it probably works because the tree isn't bang in the center but a bit above it; most other steps must be roughly half-and-half noise vertically. And the reason it doesn't minimize on some earlier, merely horizontally-balanced step (where every step is mostly half-and-half anyway) is that the points on the trunk sitting on the middle line don't contribute to any quadrant, which drags the overall product even lower.
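A quick toy check of why this would hold (made-up quadrant counts for the ~500 robots, nothing from the actual input): an even spread maximizes the product, while an uneven one, and especially robots parked on the middle row/column that count for no quadrant at all, drags it down.

jq -n '[[125,125,125,125], [200,150,100,50], [150,100,50,20]][] | .[0]*.[1]*.[2]*.[3]'
# 244140625, 150000000, 15000000  (the last split leaves 180 robots on the center lines)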
Day 14, got very lucky on this one, but too tired to think about why part 2 still worked.
spoiler
#!/usr/bin/env jq -n -R -f
# Board size # Our list of robot positions and velocities #
[101,103] as [$W,$H] | [ inputs | [scan("-?\\d+")|tonumber] ] |
# Making the assumption that the easter egg occurs when #
# the quadrant product is minimized #
def sig:
  reduce .[] as [$x,$y] ([];
    if   $x < ($W/2|floor) and $y < ($H/2|floor) then .[0] += 1
    elif $x < ($W/2|floor) and $y > ($H/2|floor) then .[1] += 1
    elif $x > ($W/2|floor) and $y < ($H/2|floor) then .[2] += 1
    elif $x > ($W/2|floor) and $y > ($H/2|floor) then .[3] += 1
    end
  ) | .[0] * .[1] * .[2] * .[3];
# Only checking for up to W * H seconds #
# There might be more clever things to do, e.g. first check #
# vertical and horizontal alignment separately #
reduce range($W*$H) as $s ({ b: ., bmin: ., min: sig, smin: 0 };
  .b |= (map(.[2:4] as $v | .[0:2] |= (
    [., [$W,$H], $v] | transpose | map(add)
    | .[0] %= $W | .[1] %= $H
  )))
  | (.b|sig) as $sig |
  if $sig < .min then
    .min = $sig | .bmin = .b | .smin = $s
  end | debug($s)
)
| debug(
  # Contrary to the original hypothesis that the easter egg #
  # happens in one of the quadrants, it occurs almost bang #
  # in the center, but this is still somehow the min product #
  reduce .bmin[] as [$x,$y] ([range($H) | [range($W) | " "]];
    .[$y][$x] = "█"
  )
  | .[] | add
)
| .smin + 1 # Our easter egg step
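And for the record, a rough sketch of the "check both axes separately" idea from the comments above (my own untested variant, not what the script above does): x positions repeat every $W steps and y positions every $H, so pick the offset with the smallest per-axis variance and then take the first step matching both residues (101 and 103 are coprime, so one exists below $W*$H).

#!/usr/bin/env jq -n -R -f
[101,103] as [$W,$H] | [ inputs | [scan("-?\\d+")|tonumber] ] as $robots |
def variance: (add/length) as $m | map((. - $m) * (. - $m)) | add/length;
# Offset in 0..$n-1 with the tightest spread along one axis (0 = x, 1 = y) #
def best($n; $axis):
  [ range($n) as $s
    | [ $robots[] | ((.[$axis] + $s * .[$axis+2]) % $n + $n) % $n ] | variance
  ] | index(min);
best($W; 0) as $sx | best($H; 1) as $sy |
# First step congruent to $sx mod $W and $sy mod $H #
first(range($W*$H) | select(. % $W == $sx and . % $H == $sy))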
And a bonus tree:
Not surprised, still very disappointed, I feel sick.
Quinn enters the dark and cold forest, crossing the threshold, an omnipresent sense of foreboding permeates the air, before being killed by a grue.
“Once we get AGI, we’ll turn the crank one more time—or two or three more times—and AI systems will become superhuman—vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler.”
Also, this doesn't give enough credit to grade-schoolers. I certainly don't think I am much smarter (if at all) than when I was a kid. Don't these people remember being children? Do they think intelligence is limited to speaking fancy and/or having the tools to solve specific problems? Maybe I'm the weird one, but to me growing up is not about becoming smarter; it's about gaining perspective. Perspective is vital, but actual intelligence/personhood is a prerequisite for it.
Hi, I'm going to be that OTHER guy:
Thank god not all dictionaries are prescriptivist; some simply reflect natural usage: Cambridge Dictionary: Beg the question
On a side rant, "begging the question" is a terrible name for this fallacy, and the very Wikipedia page you've been so kind to offer provides the much more transparent "assuming the conclusion".
If you absolutely wanted to translate from the original Latin/Greek (petitio principii / τὸ ἐν ἀρχῇ αἰτεῖσθαι): "beginning with an ask", where the ask is the assumption of the premise. [Which happens to also be more transparent.]
Just because we've inherited terrible translations does not mean we should perpetuate them through sheer cultural inertia, much less chastise others for using the much more natural meaning of the words "beg the question". [I have to wonder if "begging" here is somehow a corruption of "begin", but I can't find sources to back this up and don't want to waste too much time looking.]
I feel mildly better, thanks.
Not every rationalist I've met has been nice or smart ^^.
I think it's hard to grow up in our society without harboring a kernel of fascism in our hearts; it's easy to fall for the constantly sold "everything would work better if we just put the right people in charge". With varying definitions of who the "right people" are:
- Racism
- Eugenics
- Benevolent AI
- Fellow tribe
- The enlightened who can read "the will of the people" or who are able to "carve reality at the joints"
- Some brands of "sovereign citizen" or corporate libertarianism (I'm the best person in charge of me!).
- The positivist invokers of ScientificProgress™
Do they deserve better? Absolutely, but you can't remove their agency; they ultimately chose this. The world is messy and broken, and it's fine not to make too much peace with that, but you have to ponder your ends and your means more thoughtfully than a lot of EAs/Rationalists do. Falling prey to magical thinking is a choice, and/or a bias you can overcome (which I find extremely ironic given the bias-correction advertising in Rationalist spheres).
It makes you wonder about the specifics:
- Did the 1.5 workers assigned for each car mostly handle issues with the same cars?
- Was it a big random pool?
- Or did each worker have their geographic area with known issues?
Maybe they could have solved the context issues and possible latency issues by seating the workers in the cars, and, for extra-quick intervention speed, putting them in the driver's seat. Revolutionary. (Shamelessly stealing Adam Something's joke format about trains)
Student: I wish I could find a copy of one of those AIs that will actually expose to you the human-psychology models they learned to predict exactly what humans would say next, instead of telling us only things about ourselves that they predict we're comfortable hearing. I wish I could ask it what the hell people were thinking back then.
I think this part conveys the root insanity of Yud: failing to understand that language is a cooperative game between humans, who have to trust in common, shared lived experiences to believe the message was conveyed successfully.
But noooooooo, magic AI can extract all the possible meanings and internal states of all possible speakers in all possible situations from textual descriptions alone, because: ✨bayes✨
The fact that such an (LLM-based) system would almost certainly not be optimal for any conceivable loss function / training set pair seems to completely elude him.
~~Brawndo~~ Blockchain has got what ~~plants~~ LLMs crave, it's got ~~electrolytes~~ ledgers.
EDIT: I have a sneaking suspicion that the computer will need to be re-used since the combo-operand 7 does not occur and is "reserved".
re p2
Also did this by hand to get my precious gold star, but then actually went back and implemented it. Some JQ extension required: