While a lot of the Urbit stuff seems, uh, unreasonable, the idea of an OS of sorts written for a mathematically specified machine model, where the OS shouldn’t change much, and where one could transfer this from computer to computer as long as there is an emulator for the machine model written for the particular hardware, seems fairly appealing.

The need to separately re-establish all of one’s programs and settings when moving to a new device seems like a waste, and like it shouldn’t be that way. (Of course, there are VMs which can emulate some other physically existing hardware, but then you have all these different possible emulated hardwares, and the… well, it seems like there is more complexity that way, and more edge-cases where some aspects of the hardware might not be emulated quite correctly.)

But the specific stack of 3 weird programming languages that the project uses, with their deliberately weird-sounding names for things, ugh.

Why?

Also, I feel like, ideally, a project for such an “eternal OS” would use formal methods to make machine-verifiable proofs of various properties of the OS, like what seL4 has.

Also, the machine model should probably resemble real hardware at least a little bit more than Urbit’s does.

locally-normal:

geometriclogician:

regexkind:

geometriclogician:

We need a standard symbol for denoting the binary relation of two sets being disjoint or non-disjoint, just as we have for subsets. Writing A ∩ B = ∅ or A ∩ B ≠ ∅ makes me sad each time I have to, and I’m getting too old to waste precious emotional energy on this nonsense.

What about

Forall x, A x -> B x -> False

Exists x, A x /\ B x

I’ve debated maybe writing A ∩ B with a slash through the ∩ part to indicate disjointness, or maybe a = with a circle between the lines for non-disjointness, but I’m not satisfied with either.

There’s \pitchfork for the first one, A⋔B. But I’m not happy with it either.

How’s about this poorly drawn depiction. It would be a bit of a pain in the ass to write neatly, but if someone latex’d it nicely it could be handy.

[image: hand-drawn sketch of the proposed symbol]

How about we use A \cap B to stand for (A \cap B) \ne \emptyset , and, idk, A \not\cap B for A \cap B = \emptyset ?
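
For what it’s worth, here’s a minimal LaTeX sketch of that last proposal (the macro names \notcap and \meets are my own inventions, not established commands; \between is a real but underused amssymb relation):

    \documentclass{article}
    \usepackage{amssymb} % provides \between
    % Hypothetical macros, not standard notation:
    \newcommand{\notcap}{\mathrel{\not\cap}} % slashed cap: "disjoint"
    \newcommand{\meets}{\mathrel{\between}}  % "non-disjoint", i.e. they meet
    \begin{document}
    Disjoint: $A \notcap B$. \quad Non-disjoint: $A \meets B$.
    \end{document}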

intimate-mirror:

raginrayguns:

You can fit a utility function to a preference ordering, but not to a finite preference ordering. (EDIT: I mean a finite preference ordering doesn’t determine a unique utility function) Everyone’s utility function is underdetermined by the choices they’ve made so far. It’s always possible in the future to make choices that change which utility function is the best fit to your choices, including your past choices. So what you really wanted, why you really did something, is a statement about an infinite sequence of future actions. That puts it in the same class of statements as stuff like the Gödel string, or consistency of a formal system. Claiming someone has a particular utility function is a claim that infinite future choices will not deviate from it, just as consistency is a claim that a proof search will never find an inconsistency. So it’s the kind of thing where you can have incompleteness or undecidability.
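
(A toy illustration of the finite-fit point, with made-up options and utilities: two utility functions that agree on the same finite choice history but diverge on a future choice.)

    # Two utility functions consistent with the same finite choice history.
    history = [("a", "b"), ("b", "c")]  # observed: chose a over b, then b over c

    u1 = {"a": 3, "b": 2, "c": 1, "d": 0}   # one fit
    u2 = {"a": 3, "b": 2, "c": 1, "d": 9}   # another fit, identical on a, b, c

    def consistent(u, history):
        """True if u ranks the chosen option above the rejected one every time."""
        return all(u[chosen] > u[rejected] for chosen, rejected in history)

    print(consistent(u1, history), consistent(u2, history))  # True True
    # A future choice between "a" and "d" discriminates between the two fits:
    print(max(["a", "d"], key=u1.get))  # a  (u1 says you prefer a)
    print(max(["a", "d"], key=u2.get))  # d  (u2 says you prefer d)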

Humans are trivially undecidable in a certain sense, in that they can emulate Turing machines, and so there’s no program that solves the halting problem for a human emulating a Turing machine. But still it’s interesting to apply this perspective to a human’s “revealed preferences”. Not that you can’t infer someone’s preferences, it’s definitely an approachable modeling problem, but you can’t ever achieve certainty here, they can always just like, start behaving in a different way if a certain Turing machine halts, which makes their past actions part of a different pattern, and therefore you fit different “revealed preferences” to it.

And I think if you apply that perspective to yourself… the point I think is to give up on the idea that you have a true utility function you need to discover. Like if you want to limit yourself and turn yourself into basically a chess-playing program, sure, you can fit something to your past actions and feelings and commit to following it forever. But that’s not discovering your true utility function, that’s making an infinite sequence of future choices by a certain method, and doing that is what makes it your true utility function… except that’s never complete, cause you can stop at any time.

one of the most important facts about the halting problem is that while you can’t write a program that determines for all programs whether they halt, you absolutely can prove that a giant swathe of programs halts
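
(toy example, my own construction: a conservative checker that certifies halting for the narrow class of python programs built only from constant-bound for-loops. True really means “halts”; False just means “can’t tell”, not “runs forever”)

    import ast

    def provably_halts(source: str) -> bool:
        """Conservative: True only for straight-line code plus for-loops
        over range() with constant bounds."""
        tree = ast.parse(source)
        for node in ast.walk(tree):
            # while-loops, function defs (recursion), and arbitrary calls
            # could diverge, so reject them outright
            if isinstance(node, (ast.While, ast.FunctionDef, ast.AsyncFunctionDef)):
                return False
            if isinstance(node, ast.Call):
                f = node.func
                if not (isinstance(f, ast.Name) and f.id in ("range", "print")):
                    return False
            if isinstance(node, ast.For):
                it = node.iter
                if not (isinstance(it, ast.Call) and isinstance(it.func, ast.Name)
                        and it.func.id == "range"
                        and all(isinstance(a, ast.Constant) for a in it.args)):
                    return False
        return True

    print(provably_halts("for i in range(10):\n    print(i)"))  # True
    print(provably_halts("while True:\n    pass"))              # False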

if you do introspection, like program analysis, you can figure out features of your own mind, hypothetically* including a utility function. and minds are finite, so there’s no theoretical thing stopping this from happening

a large part of this post boils down to a pseudo-technical emotional argument that life is meaningless if it doesn’t include infinite chains of regress

i guess this belief makes sense to a bunch of people, but if you’re trying to persuade, “output doesn’t uniquely determine program” isn’t relevant and “if you’re describable then you’re as inhuman as deep blue” isn’t helpful as an argument

1) To elaborate on that “minds are finite” comment: insofar as human behavior can be described in a materialist way, then, assuming currently fairly-accepted ideas about physics and information content, humans have (or at least can be effectively modeled as having) finite information capacity, and therefore facts about their behavior aren’t strictly speaking “undecidable”, as they could simply be looked up in an enormous lookup table (where the lookup table includes external influences as part of the keys when looking things up).

Of course, that amount of information is bonkers huge, so maybe one could make a variation on the “human behavior is undecidable” claim which is actually true (using e.g. an appropriate limitation on the size of the TM that is supposed to do the deciding (appropriate in light of the information capacity of a human))

2) But yes, as mentioned, people don’t actually seem to optimize for a utility function in any super-meaningful way (of course, if the utility function is a function of actions and not just outcomes, then *some* utility function will be optimized by any history of actions, but that’s why I said “in any super-meaningful way”).

3) And, I agree that if people did have utility functions, there doesn’t appear to be anything fundamental preventing knowing it, and - well…

(no longer numbered)
It’s unclear to me whether the fact that we don’t appear to quite match with a utility function (in a meaningful way) is or isn’t important for any of the like, existentially meaningful questions.

But, as you mention, the family of hypothetical questions of the form “what would one do given choice X” would certainly be enough to determine one’s utility function, if one had one, to any degree of accuracy. And I don’t think there being a fact-of-the-matter as to “what one would choose given choice X (or, in scenario X)” for all choices (scenarios) X would be any problem for meaning. This is the kind of thing a utility function would encode, so the answers to these questions would specify a utility function if one had one, and vice versa, and so I think this family of answers could be regarded as a very loose generalization of a utility function?
(I’m allowing for facts-of-the-matter as to “what one would choose” to include things like probabilistic mixes of different responses)
And, continuing to act according to such a generalization-of-a-utility-function is not something that makes one less of a person, so much as just, basically a tautology? One will do as one would do if one were to be in the situations one will be in.

Would comprehending some representation which entirely encodes this information for oneself in any way inherently conflict with any important personhood-things?
This isn’t entirely clear to me, but I feel like the answer is probably “no”?

I’m a little bit reminded of a scene I saw screenshots of from “Adventure Time”, wherein a character remarks on how they know precisely what it is that they want, and, while that character was like, evil and such, I don’t think them understanding themself in that way seemed like any kind of obstruction to their personhood…? Though, “fictional evidence isn’t”, so perhaps that’s just irrelevant.

learn-tilde-ath asked:

So, seeing as sundials in the southern hemisphere go counterclockwise, I guess there's even less reason in the southern hemisphere for clocks to go clockwise? I guess countries in the southern hemisphere just got stuck with the northern hemisphere's convention that was set for clocks, of going the wrong way around (clockwise) instead of the correct forward direction for angles (ccw)?

argumate:

what should clocks do on the equator, just tremble in terror?

No, I’m saying they should always go in the direction we currently call “counterclockwise”, in both the southern and northern hemispheres (and on the equator), and the reasonable-but-insufficient justification of “but clockwise is the way sundials go” only works in the northern hemisphere.

They should all go ccw because that’s the direction angles go.

argumate:

argumate:

learn-tilde-ath said: yeah, but what if there were a universe/laws-of-physics where arbitrarily large amounts of information can be stored in a fixed amount of space? In such a universe, perhaps there could be true immortality.

one could imagine Conway’s Life where every cell can be subdivided into smaller games of Life to arbitrary depth, allowing a “fixed size” entity to approach infinite computation asymptotically, but the entity is still forced to “grow” or die, either by terminating or repeating previous states; it might not count as death to gradually transform into an entity that is causally linked with your previous self but utterly unrecognisable to it but it definitely sounds deathish.

learn-tilde-ath said: idk, keeping the same patterns of structure in the larger cells, with the patterns in the larger cells managing the subcells and whatnot, doesn’t seem particularly “unrecognizable”. Sure, it is more complex, but it still has the same overall structure.

I mean you could cheat and have an infinite memory that you very rarely access (it’s infinite, so each bit would be accessed zero times on average!) and that would be sufficient to make you immortal in the sense of never repeating any states, simply by incrementing your age counter repeatedly.

from the outside you would look like a normal human, you would just be dragging around this six dimensional galaxy brain on an invisible string.

For some reason I didn’t respond to this at the time (Aside: I really wish I could set the blog I actually post with to be my “main blog”).
And, yes, we could imagine a setup where there’s the “main thing” which has finitely many states and just have a counter, but, like,
why?
It isn’t like, e.g., sorting algorithms (or whatever other kind of algorithm) are restricted to only working sensibly on lists below some fixed finite length.

I don’t see why whatever process is responsible for human reasoning couldn’t be scaled up to larger context sizes without there being at some point a fundamental break in the nature of the being there.

I guess, in the “become unrecognizable” part, you are thinking of the “recognizable” thing as being like a (very large) finite state machine, acting like the state machine of a Turing machine head as it acts on an infinite tape. But why not instead imagine the person normally as being more analogous to the process including what happens on the tape, where the tape has a fixed limited size (responsible for e.g. working memory, short-term memory, and long-term memory all having finite capacity) (where this limited size of the tape is much larger than the number of possible states of the head), and then just imagine what happens if you remove the cap on the size of the tape?

In principle (and not just in this world with physics as it appears to be), what should make it impossible for someone to enjoy some chocolate that reminds them of a childhood memory, while they are also working on factorizing 3^^^^^3 + 7 ?

Like, what’s the obstacle?

argumate:

raginrayguns:

it’s fun to talk about the kelly criterion and i guess we’re doing it cause of stuff sbf and wo said about it but im skeptical it’s related to the downfall of alameda

like you can do some model-based calculation that if you exceed the kelly bets you’re putting yourself at risk of losing everything

but when i read about the subprime mortgage crisis and about the fall of LTCM, their risk models were wrong. They failed in ways that the model said were impossible. So it’s not about the model-based safety of the kelly criterion vs the model-based risk of exceeding it rly i dont think

you can calculate really low risk based on like the standard 1/√n decay of variation when averaging n uncorrelated things. But then the normally-uncorrelated stuff falls together in a market downturn
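
(quick toy simulation of that, numbers mine: average n unit-variance returns, once uncorrelated and once sharing a common “market” factor)

    import random

    def sd(xs):
        m = sum(xs) / len(xs)
        return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

    def portfolio(n=100, rho=0.0):
        """Average of n unit-variance returns with pairwise correlation rho,
        built from one shared 'market' factor plus independent noise."""
        market = random.gauss(0, 1)
        return sum(rho ** 0.5 * market + (1 - rho) ** 0.5 * random.gauss(0, 1)
                   for _ in range(n)) / n

    random.seed(1)
    print(sd([portfolio(rho=0.0) for _ in range(20000)]))  # ~0.10 = 1/sqrt(100)
    print(sd([portfolio(rho=0.5) for _ in range(20000)]))  # ~0.71: the 1/sqrt(n)
    # benefit evaporates once the "uncorrelated" things fall together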

I wouldn’t have much confidence generalizing from anecdotes, but alan greenspan says in his memoirs he thinks this is like the biggest issue, that risk models need to explicitly model two “phases”, so i get some confidence from someone who knows much more than me saying something similar

so when i see some crypto traders go under during a crypto crash? And general downturn cause of a money vacuum? I don’t think this is because of some huge risk they knowingly accepted, I would guess it’s because the risk models they used to make a lot of money back when money was free told them there was no chance of losing all their money

yeah, Kelly Criterion only applies when you know the odds! and you only know the odds when you’re at a literal casino, where the odds are rigged against you and the Kelly Criterion says not to bet!

you don’t need fancy maths to avoid an FTX-style blowup, just avoid doing things like “stealing from investors” and “stealing from customers” and “failing to write down where the money is” and “hiring people who may not actually exist” and “spending billions of dollars on magic beans” and

I’m unsure about “Kelly Criterion only applies when you know the odds”.

Like, sure, “exactly the Kelly Criterion”, probably only makes sense within the assumption that you know what the odds are I guess.
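
(For reference, the standard Kelly formula for a simple binary bet at known odds; the example numbers are just illustrative.)

    def kelly_fraction(p, b):
        """Kelly-optimal fraction of bankroll for a bet with known odds:
        win probability p, net payout of b per unit staked on a win."""
        return p - (1 - p) / b

    print(kelly_fraction(0.60, 1.0))  # 0.2: bet 20% on a 60/40 even-money bet
    print(kelly_fraction(0.47, 1.0))  # negative: house edge, so don't bet at all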

But suppose you are playing a variant of the “multi armed bandit” game, where you can choose how much to bet on each pull of an arm (with a certain minimum bet size I guess).
I would expect some kind of variant of Kelly to apply here?
Of course, in that case, I guess to define that game you would have to assume some probability distribution over what each of the arms has for its payout distribution…
But, perhaps one could look at like, some sort of “worst case” for what that probability distribution might be? Uh… I’m not really sure of that.
Well, suppose you are required to make all N pulls even if it seems likely that all the arms have negative expected return, so that, assuming some of the arms are better, uh, you would be likely to eventually find this out, and would therefore rule out some of the distributions from the “worst case compatible with what is observed so far”?
idk.
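
(Something like this toy sketch, maybe? All the specifics (even-money Bernoulli arms, Beta posteriors, Thompson sampling, betting a Kelly fraction of the posterior-mean win rate) are my own choices here, not a worked-out theory.)

    import random

    def kelly_fraction(p, b=1.0):
        return max(0.0, p - (1 - p) / b)  # never bet a negative fraction

    true_p = [0.55, 0.45]            # each arm's win rate, unknown to the bettor
    wins, losses = [1, 1], [1, 1]    # Beta(1, 1) prior for each arm
    bankroll, min_bet = 100.0, 0.01  # forced minimum stake on every pull

    random.seed(2)
    for pull in range(10000):
        # Thompson sampling: draw a plausible win rate per arm, play the best
        draws = [random.betavariate(wins[i], losses[i]) for i in range(2)]
        arm = max(range(2), key=lambda i: draws[i])
        p_hat = wins[arm] / (wins[arm] + losses[arm])  # posterior mean estimate
        stake = min(bankroll, max(min_bet, kelly_fraction(p_hat) * bankroll))
        if random.random() < true_p[arm]:
            bankroll += stake
            wins[arm] += 1
        else:
            bankroll -= stake
            losses[arm] += 1

    print(f"final bankroll: {bankroll:.2f}")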

(and, of course, I am considering this variant on the multi armed bandit here as potentially working as a metaphor for other decisions in life and such.)

argumate:

cryptocurrency does at least have a simple value proposition: it’s a speculative asset / ponzi scheme and so there’s a ready supply of people willing to exchange real money for it in the hope of getting rich / selling it to some other sucker later, which keeps the whole ecosystem growing (and collapsing, and growing again and then collapsing, and so on).

but if a debt trading market is “real money done better but without the backing of a sovereign government”, what’s the use case for it that drives adoption?

you can’t use it to pay your taxes, you might use it to avoid taxes but that’s obviously a lightning rod for trouble, and while you would happily use it to buy things there’s no situation in which you would rather accept it over real money when selling things unless those things are illegal or socially sanctioned in some way (drugs, sex, assassinations, and so on) such that regular payment providers keep their distance.

so you can seed a crypto trading market by taking real money and giving away magic beans but you would need to seed a debt trading market by doing the exact opposite: giving people goods and services worth real money in exchange for magic beans from them, which is obviously a less appealing proposition for an investor.

(now you could say that some venture capitalists are indirectly doing this, for example subsidising grocery delivery services that lose money in the hope of building market share for some future enterprise that might be worth something one day, but that seems more like a semi-fraudulent quirk of the low interest rate environment rather than any kind of sound business plan).

one possibility is to bootstrap the market by selling intangible things such that early adopters aren’t running the risk of bankruptcy, for example you could “sell” a web comic or podcast in which case there’s a progression from people giving you “likes” and other forms of attention that cannot be redeemed for cash to people subscribing to your patreon or hitting your tip jar, where perhaps debt trading could offer a smooth gradient across that spectrum from “I owe you a notional favour” to “I am literally buying you a coffee”.

but it does seem like that use case could be achieved by simpler methods, like the clever infrastructure isn’t really contributing much to this specific example and is only there in the opportunistic hope that the market would grow into something more than that (without being exploited for criminal ends or promptly shut down by the SEC).

I think one idea I’ve seen which is tangentially related, is the idea of, automatically finding cycles in wants, like, [“X would be willing to provide A in exchange for B”, “Y would be willing to provide B in exchange for C”, “Z would be willing to provide C in exchange for A”] (except on a much larger scale and without necessarily being single cycles, but multiple interconnected cycles)
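
(A rough sketch of that cycle-finding idea, with a made-up data shape: an edge X → Y meaning “Y offers what X wants”, so any directed cycle is a set of trades that can all clear at once. The greedy walk below can miss cycles that a full backtracking search would find.)

    # person: (good they provide, good they want); data shape is made up
    offers = {
        "X": ("A", "B"),
        "Y": ("B", "C"),
        "Z": ("C", "A"),
        "W": ("D", "A"),  # W wants something available, but nobody wants D
    }

    providers = {}
    for person, (gives, _wants) in offers.items():
        providers.setdefault(gives, []).append(person)

    def find_cycle(start):
        """Greedy walk along want -> provider edges; a path that returns to
        `start` is a closed cycle of mutually satisfiable trades."""
        path, seen = [start], {start}
        while True:
            wanted = offers[path[-1]][1]
            options = [p for p in providers.get(wanted, [])
                       if p == start or p not in seen]
            if not options:
                return None
            if options[0] == start:
                return path
            path.append(options[0])
            seen.add(options[0])

    print(find_cycle("X"))  # ['X', 'Y', 'Z']: Y gives B to X, Z gives C to Y,
                            # and X gives A to Z, closing the loop
    print(find_cycle("W"))  # None: no one wants W's good, so no cycle closes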

Perhaps if one started with some kinds of digital goods, like, art commissions, uh, recommendations/advertisements, tutoring services, and other things which could be initially done at a non-professional level over the internet, it could be possible? And then like, if you didn’t require all the chains to be closed before completing a cycle, you could create representations of debts?

Like, if you start with just a system to facilitate barter, and then only later add in some more abstract unit(s) of account (the debts), and then significantly later some people begin to exchange this unit/these units of account for like, government-endorsed currencies?

idk

theevenprime:

big-block-of-cheese-day:

argumate:

you would expect female CEOs and politicians to be as power hungry and sociopathic as male politicians and CEOs even if women on average were not, simply because of selection effects on what is a very small group.

You would also expect hunger for power and sociopathy to manifest differently.

The original @argumate argument isn’t true, though. The distributions

f(x)={c_1*N(mu, sigma^2)(x), x>=s, 0 else}

g(x)={c_2*N(mu, 2*sigma^2)(x), x>=s, 0 else}

(for c_1, c_2 appropriate normalizing constants)

look very different.

So too do the truncated density functions for even slightly different means.

What if instead of conditioning on x >= s, you instead take like, a million samples and take the 50 largest values?

At least, if you took 5 million samples from both distributions, and then took the 100 largest values from the union of those two sets, then (assuming at least some of the values came from the distribution which tends to produce smaller outputs), these 100 values will probably all be pretty close to one another (even though the ones that came from the tends-to-be-smaller distribution probably tend to be on the lower end of the 100).
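
(This is cheap to just simulate; here’s a sketch with my own parameter choices, matching the sigma^2 vs. 2*sigma^2 setup above, to check how tight the top group is and where its members come from.)

    import random

    random.seed(0)
    n, k = 1_000_000, 100
    # N(0, sigma^2) with sigma = 1, and N(0, 2 sigma^2), i.e. sd = sqrt(2)
    pool = [(random.gauss(0, 1.0), "narrow") for _ in range(n)] \
         + [(random.gauss(0, 2 ** 0.5), "wide") for _ in range(n)]

    top = sorted(pool, reverse=True)[:k]  # the k largest of the pooled samples
    values = [v for v, _ in top]
    print(f"top-{k} spread: {min(values):.2f} .. {max(values):.2f}")
    print("how many came from each:",
          {src: sum(1 for _, s in top if s == src) for src in ("narrow", "wide")})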

Anonymous asked:

it's true that political assassination is bad, but it also seems sort of ghoulish to spell it out in the context of war. it's like, killing "soldiers"? fine, good, do it as much as you want! killing random civilians who were resisting, or just living in a city you bombed? eh well it's bad but what did you think would happen, it's war. killing a SPECIFIC person who you were specifically targeting because they're important? how dare you. "people" vs "faceless mooks"

argumate:

athingbynatureprodigal:

argumate:

downzorz:

argumate:

argumate:

killing anyone is bad, but “soldiers” being people who have donned a specific costume and said “I will try to kill you while you try to kill me” are at least aware of the game being played and (conscripts aside) voluntarily choosing to play it.

killing civilians is obviously a crime and one of the reasons why war sucks so much ass, if you could simply lock the soldiers from both sides in a sealed chamber and let them sort it out between them then that would obviously be preferable!

deliberately attempting to kill a specific person (or the person standing next to them, or whoever had the misfortune to vaguely resemble them) is murder! like, a pretty basic crime! the stuff of hoodlums and gangsters! it doesn’t magically become okay just because it’s being done by gangsters paid by the government.

launching an aggressive war is still the ultimate crime that underlies so many other crimes, but assassinations are carried out even in “peacetime” too, and that doesn’t make them any better.

anotherpersonhasclaimedthisus said: Killing people in “murdering me is ok” costumes is just as criminal as killing the people conspiring to commit mass murder in fancy dress. Killing either group in self-defense is equally justifiable.

“self defense” is a stretch for most political assassinations though, much as the plane that hit the Pentagon was at least aiming at what could be described as a military target but the planes that hit the WTC were not, even if you could (and many did) construct a complicated rationale to justify them.

Political assassinations in peacetime maybe, but during a war it seems straightforwardly okay to target an opponent’s leadership structure?

in practice you rarely get the chance! the people calling the shots are usually well protected and any assassin typically has to be within their inner circle already, like the attempts against Hitler were made by other German officers.

It does seem weird to present it as a moral thing, though. “Nah, it would be wrong to kill Hitler, that would be murder. Killing the random 13-year-old who got press-ganged into Hitler’s army though? Go for it, that kid’s fair game.”

Seems like it’s one of the least-naturally-intuitive issues for people to understand: that whether or not someone’s important enough for you to have heard their name has zero bearing on the moral question of whether or not it’s okay for them to get killed.

I mean it is wrong in a technical sense to kill Hitler, but you would still do it if you judged this would lead to an immediate ceasefire and end the holocaust, wouldn’t you.

the question is whether you would assassinate Hitler in the 1920s on the grounds that he might be dangerous one day, or whether you’d assassinate some other rando who happens to be friends with Hitler on the grounds that this will strike a chill of fear into his heart or be symbolically important or some vague thing.

you’re hopefully not psyched to shoot at teenage conscripts but it can be difficult to avoid when they’re currently shooting at you, the question is how you justify blowing someone up on an average day who isn’t shooting at you.

If it would immediately bring about such a ceasefire, and even if it wouldn’t (unless maybe it would somehow prolong the war or something?), then no, it wouldn’t be wrong to kill Hitler?

Something cannot be both immoral and “worth it”. If it is morally worth it, then it isn’t, in the context, immoral.

If anyone is currently having a genocide be done on their command, it is ok to kill them. (Exception: it was one more immoral thing for Hitler to kill himself. The correct thing for him to do would have been to give himself up to the Allies and allow himself to be executed for his crimes.)

argumate:

tilde-he:

argumate:

apriisr:

argumate:

I need to create a pentagonal tiling…

[image: a pentagonal tiling of the hyperbolic plane]
[image: hand-drawn graph in which every node has 5 neighbors]

and done! perfect for all your hyperbolic bathroom needs

Huh, while every node here has 5 neighbors, and it seems like the balls around any two nodes are equivalent, this is still different from the hyperbolic tiling depicted above it. In the tiling of hyperbolic space with pentagons, each pentagon has 5 neighbors, and in the dual (the places where the lines meet), each vertex has 4 neighbors. In the graph you’ve made, while each node has 5 neighbors, the dual graph is such that some of the dual regions have 4 neighbors while others have 3.

The tiling you’ve produced here seems to be a periodic tiling of the Euclidean plane, made of (depending on how you look at it) either irregular pentagons, or a combination of squares and triangles.

[image]

y'all gone hyperbolic now!!!

[image]

oh no

There we go! ( : ] )