For the same reason that hiring is hard, deciding whom to let hire you is equally hard. In fact, to make an analogy to complexity theory, I think the complexity class of deciding whom to let hire you is a superset of the complexity class of deciding whom to hire. It has the same hardness properties as hiring: deciding what kind of people you want to work with, then selecting a set of people who match. On top of that, it adds more complexity: deciding what kind of leadership chain you want to work with, what kind of domain you're interested in working in, what level of compensation you're willing to accept, what kind of workplace culture you find appealing, and so on.

But we also have the (difficult!) task of evaluating whether a given opportunity matches our criteria. Both of these tasks are superbly difficult.

```
Let factors be the list of attributes a job has, e.g. comp or people.
Let their weights be values between 0 and 1 that determine each
factor's contribution to the job's desirability.
```
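Concretely, this could look something like the following Python sketch (the factor names and weights here are purely illustrative, not a recommendation):

```python
# A hypothetical set of factors and their weights. Each weight is in
# [0, 1] and determines that factor's contribution to a job's
# desirability. The names and values are made up for illustration.
factors = {
    "compensation": 0.8,
    "people": 1.0,
    "domain": 0.6,
    "culture": 0.7,
}

# Sanity check: every weight stays within [0, 1].
assert all(0.0 <= w <= 1.0 for w in factors.values())
```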

Lots of people go job seeking without even knowing what their factors are, let
alone their weights. Most people end up defaulting to a revealed preference of
`factors={compensation=1.0}`. I think this is a false revealed preference, not a
true one, born of a lack of deliberateness. At the least, it is certainly true
for me that in the past I over-indexed on compensation as a factor due to
under-deliberating.

It is important that this step comes early in the process, because your factors and their weights are an input to a) whom you accept interviews with, b) how you conduct your reverse interviews, and c) what data points you're looking for.

```
Let each role have an evaluation score for each factor, between 0 and 1.
```
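Extending the sketch, the per-role scores might be recorded like this (the role names and numbers are again purely illustrative):

```python
# Hypothetical evaluation scores: for each role, a score in [0, 1]
# per factor. All names and values here are made up for illustration.
scores = {
    "role_a": {"compensation": 0.9, "people": 0.6, "culture": 0.5},
    "role_b": {"compensation": 0.6, "people": 0.9, "culture": 0.8},
}

# Sanity check: every score stays within [0, 1].
assert all(
    0.0 <= s <= 1.0
    for per_factor in scores.values()
    for s in per_factor.values()
)
```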

I wish I had a robust framework for evaluating your factors. This is one of the hardest parts. From your factors and their weights, try to derive a strategy for reverse interviewing during your loop. Try to find good questions and ask them to get data points. Try to find other means of gathering data on your factors, out-of-band of the interviews. Contact employees who recently left the company, read forums, ask friends. Read the company's launch statements and blogs.

```
Let epsilon be the error in your factors' scores and weights.
Let the lower bound desirability of each role be the sum of
factor.weight * score, less epsilon for both terms.
Let the upper bound desirability of each role be computed the same way,
but adding epsilon to both terms.
```
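This bound computation can be sketched in a few lines of Python. The clamping of perturbed values to [0, 1] is my own assumption (weights and scores are defined on that interval), and all the numbers are illustrative:

```python
def desirability_bounds(weights, scores, epsilon):
    """Lower and upper bounds on a role's desirability, assuming up to
    +/- epsilon error in both the weights and the scores.

    Perturbed values are clamped to [0, 1], since that is the range
    weights and scores are defined on (an assumption of this sketch).
    """
    lo = sum(max(weights[f] - epsilon, 0.0) * max(scores[f] - epsilon, 0.0)
             for f in weights)
    hi = sum(min(weights[f] + epsilon, 1.0) * min(scores[f] + epsilon, 1.0)
             for f in weights)
    return lo, hi

# Illustrative numbers only.
weights = {"compensation": 1.0, "people": 0.5}
role_scores = {"compensation": 0.8, "people": 0.6}
lo, hi = desirability_bounds(weights, role_scores, epsilon=0.1)
```

A role whose lower bound exceeds another role's upper bound is clearly preferable under this model; roles whose intervals overlap are effectively tied, given your measurement error.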

After calculating the weighted sums, each role will have a score, and given the epsilon error bounds there should be a cluster of overlapping high-scoring roles at the top.

Not to get too caught up in false rigor: this process isn't meant to be the decider itself. It's just a lightweight framework to help you craft a process that gathers better data on the roles you're considering, with a possibly meaningful score for each role at the end.

Most likely, the score won't be able to fully capture how you think and feel about each role.

The rest is intuition and feel.