IGCs (idea-goal-context triples) are a way of introducing Yes or No Philosophy and Critical Fallibilism. I’m posting this seeking feedback. Does this make sense to you so far? Any objections? Questions? Doubts? Ideas that are confusing?
I marked a part I found unclear with a 🤔
Ideas cannot be judged in isolation. We must know an idea’s goal or purpose. What problem is it trying to solve? What is it for? And what is the context?
Makes sense.
Your ideas about the appropriate use of commas are useful for writing, editing, and offering writing criticism, but not useful for dealing with a house fire. So you can’t judge your comma ideas in a way that applies independently of the context they’re intended to deal with. If someone asks you for help dealing with a fire and you reply that commas are mandatory to set off a long introductory clause or phrase, on the basis that that comma-related statement is a perfectly good and true idea, that would be unhelpful and unreasonable.
If you speak of an idea being good without stating a context, you have to have at least some implied context in mind (like it’s useful for solving certain problems, it meets certain criteria of elegance you have for an idea in some field, whatever). Talking in a loose manner like that is okay if you understand what’s really going on.
So we should judge IGCs: {idea, goal, context} triples.
The same idea, “run fast”, can succeed in one context (a foot race) and fail in another context (a pie eating contest). And it can succeed at one goal (win the race) but fail at another goal (lose the race to avoid attention).
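The “run fast” example can be made concrete with a tiny sketch (my own illustration, not from the post; the lookup table and function name are made up). It represents an IGC as an (idea, goal, context) triple and gives it a binary pass/fail judgment:

```python
# Toy illustration: an IGC is an (idea, goal, context) triple, and the
# evaluation is binary -- the triple either works or it doesn't. The
# success table is hypothetical, just enough to mirror the examples.

SUCCESSFUL_IGCS = {
    ("run fast", "win the race", "foot race"),
}

def evaluate_igc(idea, goal, context):
    """Binary judgment: does this idea succeed at this goal in this context?"""
    return (idea, goal, context) in SUCCESSFUL_IGCS

# Same idea, three different questions:
print(evaluate_igc("run fast", "win the race", "foot race"))           # True
print(evaluate_igc("run fast", "win the race", "pie eating contest"))  # False (context changed)
print(evaluate_igc("run fast", "lose the race to avoid attention", "foot race"))  # False (goal changed)
```

Changing any one element of the triple makes it a different question, which is why the same idea gets different verdicts above.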
Makes sense. I thought the examples were concise and effective.
I think that with my example earlier, the house fire would be a context and a desire to put out the fire would be a goal.
Think in terms of evaluating IGCs not ideas.
A core question in thinking is: Does this idea succeed at this goal in this context? If you change any one of those parts (idea, goal or context) then it’s a different question and you may get a different result.
There are patterns in IGC evaluations. Some ideas succeed at many similar goals in a wide variety of contexts.
Yeah. Like the general idea of being organized succeeds for various goals in cooking, programming, and house tidiness.
Good ideas usually succeed at broad categories of goals and are robust enough to work in a fairly big category of contexts. However, a narrow, fragile idea can be valuable sometimes. (Narrow means that the idea applies to a small range of goals, and fragile means that many small adjustments to the context would cause the idea to fail.)
There are infinitely many logically possible goals and contexts. Every idea is in infinitely many IGCs that don’t work. Every idea, no matter how good, can be misused – trying to use it for a goal it can’t accomplish or in a context where it will fail.
Whether there are some universal ideas (like arithmetic) that can work in all contexts is an open question.
🤔 The IGC concept seems to treat contexts in a more precise way than people might typically talk about that concept. Given that, I am not sure what it would mean for an idea to work in all contexts. I think I’m getting tripped up on “all” specifically. Some ideas seem like they’d be irrelevant for some contexts, so I’m not sure what it would mean for an idea to work in a context for which it is irrelevant. Is there an implied qualifier on “all” that’s something like “for which the idea is relevant”? That’s one guess as to what Elliot might mean. I may have a big misunderstanding of what Elliot means by contexts, though. Not sure.
Regardless, all ideas fail at many goals. And there are many more ways to be wrong than right. Out of all possible IGCs, most won’t work. Totally random or arbitrary IGCs are very unlikely to work (approximately a zero percent chance of working).
Right. That’d be kind of like a creature with a totally random genetic code being well-adapted to some niche – very unlikely to happen.
Truth is IGC success – the idea works at its purpose. Falsehood or error is an IGC that won’t work. Knowledge means learning about which IGCs work, and why, and the patterns of IGC success and failure.
So far, this is not really controversial. IGCs are not a standard way of explaining these issues, but they’re reasonably compatible with many common views. Many people would be able to present their beliefs using IGC terminology without changing their beliefs. I’ve talked about IGCs because they’re more precise than most alternatives and make it easier to understand my main point.
People believe that we can evaluate both whether an idea succeeds at a goal (in a context) and how well it does. There’s binary success or failure and also degree of success.
Right. Like for the context of someone living in a Western country, someone could judge whether an idea succeeds at the “getting rich” goal. But then if the idea (say a business idea) does really well, they might say it really succeeded at the getting rich goal, as opposed to merely succeeded. So that would be a degree thing.
On the one hand I think that it can be reasonable to have a range of outcomes that you treat as a broad category of “succeeding at goal”. There doesn’t seem to be anything inherently wrong with that to me.
On the other hand, it seems like you could think of things in terms of multiple different goals, e.g. “get rich” and “get really rich”. And even if you didn’t know the exact point where the cutoff was between those two, you might know what definitely counts as one but not the other. So you might know that making $1 million counts as “rich” but not “really rich”, and that $10 million counts as “really rich” and not merely “rich”. So you could break down the “degrees” into different goals if you wanted to.
Therefore, it’s believed, we should reject ideas that will fail and then, among the many that can succeed, choose an idea that will bring a high degree of success and/or a high probability of success.
I claim that this approach is fundamentally wrong. We can and should use only decisive, binary judgments of success or failure.
The main cause of degree evaluations of ideas is vagueness, especially vague goals.
My guess is that Elliot will address issues like the one I bring up with my rich versus really rich example when he talks about vagueness.
I think this is answered (within yes/no evaluation) by adding goals and breakpoints.
If your goal was to be rich and you had 2 non-refuted plans, then you need to do more work to find out which one is better. In particular, you need to introduce more goals and decide on breakpoints along the “rich” scale (presumably net worth or something). Using fuzzy breakpoints is okay until you need more specific ones.
One possible goal to add is being mega-rich, like a net worth of $100m+ or $1b+ or something (those two figures are new breakpoints). Hopefully one of your plans is good enough to meet that standard — if only one plan is, then you know which plan to pick.
Another goal might be low-risk, so getting to $10m in a longer-term safer way is better than doing a startup or something. We introduced another (rather fuzzy) breakpoint on the “risk” scale.
We might be able to add extra goals without more breakpoints, too, and perhaps we can tweak a goal and introduce more breakpoints. Both of those methods can get a list of 2 candidate IGCs down to 1, but don’t work all the time.
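Max’s narrowing procedure can be sketched in code (a toy illustration of my own; the plan names, net-worth figures, and breakpoints are all made up). Each added goal is a yes/no test, and you keep adding goals with breakpoints until only one candidate plan survives:

```python
# Toy illustration of narrowing two non-refuted plans by adding binary
# goals with breakpoints. All figures and plan attributes are hypothetical.

plans = {
    "startup": {"net_worth": 150_000_000},
    "steady career": {"net_worth": 10_000_000},
}

# Each goal is a binary judgment with an explicit breakpoint.
goals = {
    "rich ($1m+ breakpoint)": lambda p: p["net_worth"] >= 1_000_000,
    "mega-rich ($100m+ breakpoint)": lambda p: p["net_worth"] >= 100_000_000,
}

def surviving_plans(plans, goals):
    """Keep only the plans that pass every binary goal."""
    return [name for name, p in plans.items()
            if all(passes(p) for passes in goals.values())]

# The "rich" goal alone doesn't decide (both plans pass); adding the
# "mega-rich" breakpoint narrows the candidates to one.
print(surviving_plans(plans, goals))  # ['startup']
```

Note that no goal here is judged by degree: each breakpoint turns a scale into a binary test, which is the point of the procedure.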
Max:
Yeah. One goal I thought of was something like “have time to do stuff that isn’t making-money related”. That’s a big additional constraint that might cause you to reject some of the plans where you make more money but don’t have time.
A lot of people approach their career choices that way. There are plenty of people who don’t just try to maximize money-making by doing a startup or being an investment banker or whatever. They try to have a certain amount of money and other stuff too (like time for their family or being able to travel or having summers off or whatever).
I replied at http://curi.us/2387-igcs#4