
Monte Carlo confidence intervals

Pub. 2021 Oct 22

This isn't a super well-thought-out idea. Just getting that out there ahead of time.

In this post I'm going to go over an idea I had a couple weeks ago for generating confidence intervals in a Monte Carlo way. It is NOT based on the bootstrap technique. The intended purpose is for situations where you have some hidden parameter $p$, some observable quantity, and a simulated model of the random process that connects $p$ and the observation. The goal is an algorithm that is very cheap to run, easy to program, and adaptable to a wide range of situations.

This is going to be the first post in (what I believe will be) a two-part series. This first part will try to explain the idea using the problem of finding a confidence interval for the parameter in a binomial experiment.

The background, at a breakneck pace

The setup

The algorithm

The algorithm I'm going to be describing here will not be constructing full confidence intervals. It will instead be generating pointwise estimates. In other words, the algorithm will be trying to answer the question "is this particular value inside the 90% (or whatever) confidence region?" You would then invoke the process again for many other potential values of the truth, and eventually get as dense a set of yeses/nos as you need.

Just based on the previous sections' description of the problem and the definition of a confidence region, a super naive first algorithm might look like this block of pseudocode:

#define N_PROB_ITERS (a lot)
#define N_CI_SAMPLES (a lot)

bool mcci_1(observation, truth, level) {
	prob_observation = 0;
	for (N_PROB_ITERS) {
		if (sample(truth) == observation) {
			prob_observation += 1.f / N_PROB_ITERS;
		}
	}

	list_of_sample_probs[N_CI_SAMPLES];
	for (i from 0 to N_CI_SAMPLES) {
		observation_2 = sample(truth)

		prob = 0;
		for (N_PROB_ITERS) {
			if (sample(truth) == observation_2) {
				prob += 1.f / N_PROB_ITERS;
			}
		}
		list_of_sample_probs[i] = prob;
	}

	sort list_of_sample_probs;
	discard bottom (1 - level) percentage of list_of_sample_probs;
	return (list_of_sample_probs[0] < prob_observation);
}

This is pretty garbage as it stands. With high enough N_PROB_ITERS and N_CI_SAMPLES it will converge to the right answer, but very, very slowly. This method is actually how I got my (approximate) ground truth 50% confidence interval [0.53, 0.75]. But to do that I had to use an N_CI_SAMPLES of like 1000 and an N_PROB_ITERS of 900, and that last one still wasn't anywhere near enough! I had to importance sample the probability calculations (more on that later) to get it anywhere close to convergence.
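To make this concrete, here's a minimal runnable C sketch of mcci_1 for the binomial setup. Everything specific in it is my own placeholder, not part of the algorithm: the 20-trial experiment, the hypothetical observation of 13 successes, the helper names sample_binomial() and estimate_prob(), and the grid of candidate p values that main() scans (which is just the "ask the pointwise question many times" pattern from earlier).

#include <stdio.h>
#include <stdlib.h>

#define N_TRIALS 20        /* assumed: a binomial experiment with 20 trials */
#define N_PROB_ITERS 900   /* the values mentioned above */
#define N_CI_SAMPLES 1000

/* The "simulated model": one draw from Binomial(N_TRIALS, p). */
int sample_binomial(double p) {
	int k = 0;
	for (int i = 0; i < N_TRIALS; i++)
		if ((double)rand() / RAND_MAX < p) k++;
	return k;
}

/* Brute-force estimate of P(observation | p), as in the pseudocode. */
double estimate_prob(int observation, double p) {
	int hits = 0;
	for (int i = 0; i < N_PROB_ITERS; i++)
		if (sample_binomial(p) == observation) hits++;
	return (double)hits / N_PROB_ITERS;
}

int cmp_double(const void *a, const void *b) {
	double d = *(const double *)a - *(const double *)b;
	return (d > 0) - (d < 0);
}

/* Is `truth` inside the `level` confidence region, given `observation`? */
int mcci_1(int observation, double truth, double level) {
	double prob_observation = estimate_prob(observation, truth);

	double sample_probs[N_CI_SAMPLES];
	for (int i = 0; i < N_CI_SAMPLES; i++) {
		int obs_2 = sample_binomial(truth);
		sample_probs[i] = estimate_prob(obs_2, truth);
	}

	qsort(sample_probs, N_CI_SAMPLES, sizeof(double), cmp_double);
	/* Discard the bottom (1 - level) fraction; keep the most probable `level`. */
	int cutoff = (int)((1.0 - level) * N_CI_SAMPLES);
	return sample_probs[cutoff] < prob_observation;
}

int main(void) {
	srand(1);
	/* Hypothetical observation: 13 successes out of 20 trials. */
	for (double p = 0.05; p < 1.0; p += 0.05)
		printf("p = %.2f  in 50%% region: %s\n", p,
		       mcci_1(13, p, 0.50) ? "yes" : "no");
	return 0;
}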

[todo: image]

The main idea is that, if an observation obs is actually in the 50% confidence region, then $\frac{dP}{dM}(obs_2) < \frac{dP}{dM}(obs)$ at least 50% of the time (exactly 50% right on the boundary), for $obs_2 \sim P$. For the 70% confidence region, at least 30% of the time. So, if you knew how to find probability densities exactly, you could make a map that looks something like this:

[todo: image]

You'd then take small circles around an area, like shown in the picture, and ask "does the fraction of dots in the circle whose re-sampled observation came out less probable (say, the blue ones) exceed 50% (or 30% for the 70% region, and so on)?" If yes, then that point is in the 50% (or 70% or whatever) confidence region.
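Said with a little more notation (this is my restatement, nothing official): writing $\mathcal{C}_\alpha$ for the level-$\alpha$ highest-density confidence region of the distribution $P$ induced by a candidate truth value,

$$obs \in \mathcal{C}_\alpha \iff \Pr_{obs_2 \sim P}\left[\frac{dP}{dM}(obs_2) < \frac{dP}{dM}(obs)\right] \ge 1 - \alpha,$$

so the circle-of-dots test is just estimating that left-hand probability from the nearby precomputed samples and comparing it against $1 - \alpha$.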

So now the algorithm is looking something like:

#define N_SAMPLES (a lot)

truth_vals[N_SAMPLES]; // Generate these however you want.
truth_results[N_SAMPLES];
for (i from 0 to N_SAMPLES) {
	prob_observation = true_probability(obs, truth_vals[i]);

	obs_2 = sample(truth_vals[i]);
	prob_obs_2 = true_probability(obs_2, truth_vals[i]);
	truth_results[i] = (prob_obs_2 < prob_observation);
}

bool mcci_2(truth, level, radius) {
	
	truth_subset = get_truths_indices_in_circle(truth, radius);
	
	ratio = 0;
	for (i from 0 to truth_subset.length) {
		ratio += (truth_results[truth_subset[i]] ? 1 : 0);
	}
	ratio /= truth_subset.length;
	return (1 - level <= ratio); // the "less probable" fraction must be at least 1 - level
}

Much better.

This algorithm has the fairly useless feature that you can adjust the level parameter to see the 50%, 70%, 90%, 10% confidence regions without having to recalculate the probabilities. Which is okay I guess. Though I'd rather have a more stable, cheaper algorithm without this property.
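Here's the same kind of hypothetical C sketch for this version, again for the binomial case. For a binomial experiment, true_probability() can just be the exact binomial pmf, and the one-dimensional "circle" is an interval on the p axis. The specific numbers (20 trials, 13 observed successes, 20000 scattered p values, a radius of 0.02) are arbitrary choices of mine; compile with -lm for lgamma().

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N_TRIALS 20      /* assumed experiment size, as before */
#define N_SAMPLES 20000

double truth_vals[N_SAMPLES];
int    truth_results[N_SAMPLES];

/* Same simulated model as before: one draw from Binomial(N_TRIALS, p). */
int sample_binomial(double p) {
	int k = 0;
	for (int i = 0; i < N_TRIALS; i++)
		if ((double)rand() / RAND_MAX < p) k++;
	return k;
}

/* Exact Binomial(N_TRIALS, p) pmf, playing the role of true_probability(). */
double true_probability(int k, double p) {
	double log_prob = lgamma(N_TRIALS + 1) - lgamma(k + 1) - lgamma(N_TRIALS - k + 1)
	                + k * log(p) + (N_TRIALS - k) * log(1.0 - p);
	return exp(log_prob);
}

/* Precompute: scatter candidate p values and record, for each one, whether a
 * re-sampled observation came out less probable than the actual observation. */
void precompute(int obs) {
	for (int i = 0; i < N_SAMPLES; i++) {
		/* "Generate these however you want"; uniform here, nudged away
		 * from exactly 0 and 1 so log() stays finite. */
		truth_vals[i] = 0.001 + 0.998 * (double)rand() / RAND_MAX;
		double prob_observation = true_probability(obs, truth_vals[i]);
		int obs_2 = sample_binomial(truth_vals[i]);
		truth_results[i] = true_probability(obs_2, truth_vals[i]) < prob_observation;
	}
}

/* Is `truth` inside the `level` confidence region? Average truth_results over
 * the precomputed p values within `radius` of it. */
int mcci_2(double truth, double level, double radius) {
	int in_circle = 0, less_probable = 0;
	for (int i = 0; i < N_SAMPLES; i++) {
		if (fabs(truth_vals[i] - truth) <= radius) {
			in_circle++;
			less_probable += truth_results[i];
		}
	}
	return in_circle > 0 && (double)less_probable / in_circle >= 1.0 - level;
}

int main(void) {
	srand(1);
	precompute(13);    /* hypothetical observation: 13 successes out of 20 */
	for (double p = 0.05; p < 1.0; p += 0.05)
		printf("p = %.2f  in 50%% region: %s\n", p,
		       mcci_2(p, 0.50, 0.02) ? "yes" : "no");
	return 0;
}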

So now, in a picture, it looks something like this:

[todo: image]

But this one also has a problem. It relies very heavily on being able to write a true_probability(), and you can't just get away with something that has merely good-enough accuracy.

Why is that? Why is the accuracy important? Why don't approximations work?

Take a gander at this picture:

[todo: image]

In this image, you have two numbers, a red 4 and a blue 5. Then you

The idea...

The nation of Fictopia is small and growing, and its economy has its ups and downs. As most small countries do, it rapidly industrialized, leading to many debilitating injuries. Also, in an effort to drive down prices, deals were struck to make certain imports cheap, imports of things that were previously made on home soil. Some sort of "social safety net" is needed.

But why would such safety nets ever be made and maintained? They'd have to be supported, at least to some extent, by the people who aren't down on their luck. Individuals have no real reason to support that kind of policy ("Just get a job!"), and so will oppose it. People with actual money will capitalize on that dissent and try to influence the actual policy makers. And if the safety net is "too good," the largest and richest job makers will also use their money and social leverage to influence policy makers, in an effort to avoid having to compete with the safety net (for example, if someone deliberately sabotages their own job and collects benefits, the job makers would have to offer better benefits to win those people back).

So let's bring the consequences home. Every once in a while, completely at random and with no warning, people will wake up to find all of their money and property confiscated. No money, no home, no transportation; any business or savings you own, all gone. If you do this often enough, to the point where everyone has had this happen to them (or is close with someone who has), there's a good chance that individuals would support the safety nets. Because there's a good chance it can happen to you too! And a policy maker can only get away with something that threatens their entire populace with destitution for so long. So maybe random acts of targeted poverty could force a society like Fictopia to have extensive, practical, and sustainable social safety nets.

(of course this isn't just about social safety nets)

...

In Fictopia there is certainly going to be some form of insurance floating around. There's no reason to believe that it won't be similar to America today, where insurance is essentially mandatory (renter's insurance, small business insurance, etc.), or at least seen as a good idea (flood insurance, etc.). Which isn't necessarily bad.

In the countries surrounding Fictopia, however, there are problems with people being dissatisfied with their insurance providers. When disaster strikes, insurance companies bend over backwards to avoid paying out, citing strange fine print and questionable logic. When the amount and frequency of these disasters is very low, insurance companies can get away with practices like these for a very long time. These companies sometimes have a lot of resources (maybe by virtue of never paying out) to fight legal battles until the people suing them decide it isn't worth it.

So again, what if you brought the problem home? Randomly and without warning... flood some buildings. Start some fires. Make insurance claims common enough that regular people experience them and remember the insurance companies' names, so those companies actually have to start competing on the services they're supposed to provide.

(of course this isn't just about insurance companies)

...

Government in Fictopia isn't overly bureaucratic. Bureaucracy has a way of filling voids. But governments would have a hard time being filled with paper-pushers if, say... government buildings got burnt down without warning. And their computers got wiped occasionally. If there are too many forms, requests, and records to regenerate while keeping the government running, and if these things happen often enough, there would just have to be less paperwork and fewer paper-pushers, wouldn't there?

(of course, this isn't just about government bureaucracies)

...

Sometimes natural disasters happen in Fictopia. But it's okay, the community is ready. Hospitals are fully capable of running when the power grid goes out because... the power to the hospital just gets shut off sometimes. Sometimes for days at a time, and with no warning. A power grid issue is just another Tuesday for them.

Sometimes food shortages are manufactured, and so the populace is more or less aware of how to deal with a food shortage with little to no casualties. Because they've all done it before.

A Katrina-like situation wouldn't necessarily be a walk in the park for the residents of Fictopia, but people would know how to survive the aftermath.

(of course, this isn't just about natural disaster preparedness)

... is bad

Of course it's a terrible idea. Let's go over some reasons.

The first big problem is the source of all of these manufactured disasters. There would have to be some "chaos engine": some source of completely random destruction that exists in the world. This person, or machine, or whatever, would have to be capable of fucking with the rest of society in strange ways without detection until after the fact. How would that even work? It would require a system where the chaos engine (and no one else) is able to mess with things, knows how to do it, and can keep people from fixing those weak points.

This chaos engine, since it exists in the real world, would be susceptible to being manipulated. What if someone wanted bad things to happen more often to their enemies? Or if they wanted to be spared from all of the chaos, so they don't have to spend as much effort and time preparing for its destruction? This defeats the purpose, and would probably just make Fictopia overall worse at fostering the sorts of behaviors that its founders were trying to achieve.

Plus, it'll inevitably be weakened over time. Some (possibly well-meaning) fool will eventually suggest something like "if Bob got a bad chaos roll yesterday, then he shouldn't get one again today," which will fuck the whole thing up. This idea rests delicately on the assumption that the random chaos events are independent. Otherwise it can be gamed.

The examples I mentioned probably wouldn't work out the way I described. Take the insurance one, for example. It'll probably be observable whether the chaos was caused by the engine or by an actual accident. So what's to stop an insurance company from just saying "we'll pay out for anything we can determine to be caused by the chaos engine, but all actual accidents are insured the same as they used to be"? The point of that example was that simulating minor individual disasters could expose the areas where businesses purporting to provide a critical service fail. See insurance companies, escrow companies, relief nonprofits, etc. If the simulations are too different from the actual disasters being prepared for (read: any different at all), the simulation becomes less than worthless.

Or take the first example. There were several points that it was trying to make, and one of them was about a way to potentially dampen the effect of money on policy making. But this is absurd: in the absence of money as a source of power, something else will just take its place. Maybe status, maybe information, maybe contacts. Those with power will find some way to hang on to it. See the Soviet Union.

Here's another potential issue. Whatever distribution the chaos engine puts out, everyone will observe it under a change of basis. So it's no longer the case that everyone shoulders the issues equally; some will still shoulder more of them. This has been exploited many times before. See current American race relations.

Remark

One final thing that I'd like to mention. Even though this particular idea is shit, I probably won't stop thinking about possible different models of how to run societies. Sometimes I hear something along the lines of "Our constitution and rules are fine (modulo some stuff); it's just that people are awful and messed it all up! They abused the rules and considered themselves and their friends to be above them. [Insert more stuff here]. We should just throw out all the corrupt people and we'd be good." This may be a strawman of what these people think, but I want to say something about it anyways.

Sets of rules beget dominant strategies. I think it's silly to suggest that, if we were to reset America and try again, similar bad situations wouldn't arise again over similar periods of time. Initial state -> strategies (which are a function of the rules) -> final state. So either the initial state has to be changed (somehow), or the rules need to be changed. In the case of America this is actually a good situation to be in, since our founding documents are fairly short. That means a much smaller surface area to start thinking about alterations.

I could probably ramble at length about specifics, but I doubt that discussion would be even half as lucid as this mess was.
