Currently, I’m working on a dissertation on underdeterministic causation at the University of Pittsburgh, in the history and philosophy of science department. Before (and meanwhile), I earned a few other degrees. I explain this confusing trajectory below. First, though, more on my research.
Most of my attention has recently been spent investigating underdeterministic causal phenomena. Take an event that elevates the modal status of another event. For instance, if Breton hadn’t met Vaché, surrealism wouldn’t have come to be; if he had, it might have. He did. Surrealism began. Therefore, Breton meeting Vaché was a cause (or a causal background condition, if you prefer) of the advent of surrealism. These two counterfactual dependencies suffice to claim a causal relationship between the meeting and the movement. The relationship exemplifies a causal concept that current theories of causation cannot account for: an underdeterministic cause elevates the modal status of its effects. A new theory is needed. And there’s more. Underdeterministic token causation is a member of an entire family of underdeterministic concepts: type causation, counterfactuals, causal modals, independence, underdeterministic structural equations. Read about them here.
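In Lewis-style counterfactual notation, the two dependencies can be glossed roughly as follows (a sketch, not the official formulation; here $C$ stands for the meeting and $E$ for the advent of surrealism):

```latex
% Would-counterfactual: without the cause, the effect would not have occurred.
\neg C \mathrel{\Box\!\!\rightarrow} \neg E
% Might-counterfactual: with the cause, the effect might have occurred.
C \mathrel{\Diamond\!\!\rightarrow} E
```

Together, the two say that $C$ elevated $E$’s modal status from impossible to possible.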
The causal-causal theory
While developing the theory of underdeterministic token causation, I wanted to build it off the most successful deterministic theory; after all, deterministic causes are edge cases of underdeterministic causes, for they also elevate the modal status of their effects: from impossible to necessary. When I first encountered the structural equations literature on deterministic causation, I saw a project steadily progressing: each new theory handled more cases, and although it also prompted new counterexamples, its successor would handle these too. And repeat. This clearly—or so I thought—was a flourishing research program. It’s built on two assumptions: that causation directly translates into a counterfactual dependence between the cause and the effect, and that this dependence is constrained by how normal the events involved are.
Now, however, I can’t but see epicycles upon epicycles. Consider Weslake’s partial theory of causation. The theory greatly improves on its predecessor by Halpern and Pearl, but the improvement is achieved by combining conditions that seem independent; moreover, one of the conditions needs to be applied to every node along some path from the cause-node to the effect-node. The theory works, to the extent that it does, because it recovers the recursive structure of the causal judgment: a cause causes a distant effect in virtue of causing one of the effect’s direct causes. Add a constraint on asymmetry, and you’ve captured the structure. The resulting theory, dubbed the causal-causal theory, is here. It’s not intended to handle cases that involve prevention, though it does handle some such cases. It also breaks down for some voting scenarios. Still, it seems to work better—i.e., account for more cases—than any previous deterministic theory. The next step is to deal with prevention and voting scenarios.
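For readers unfamiliar with the structural-equations machinery these theories are built on, here is a minimal sketch in Python. The variables and the chain are made up for illustration; the code shows but-for dependence under an intervention, not Weslake’s conditions or the causal-causal theory itself.

```python
# A minimal structural-equations model (illustration only, with made-up
# variables; not Weslake's theory or the causal-causal theory).
# Each endogenous variable is a boolean function of its parents.

def evaluate(equations, exogenous):
    """Solve an acyclic model by repeated passes until every value settles."""
    values = dict(exogenous)
    for _ in range(len(equations)):  # enough passes for an acyclic model
        for var, f in equations.items():
            try:
                values[var] = f(values)
            except KeyError:
                pass  # a parent hasn't been computed yet; retry next pass
    return values

def do(equations, interventions):
    """Pearl's do-operator: replace each intervened-on equation with a constant."""
    new_eqs = dict(equations)
    for var, value in interventions.items():
        new_eqs[var] = lambda v, value=value: value
    return new_eqs

# A simple causal chain C -> D -> E, driven by an exogenous variable U.
equations = {
    "C": lambda v: v["U"],
    "D": lambda v: v["C"],
    "E": lambda v: v["D"],
}

actual = evaluate(equations, {"U": 1})
counterfactual = evaluate(do(equations, {"C": 0}), {"U": 1})

# But-for dependence: E flips from 1 to 0 when we intervene on C.
print(actual["E"], counterfactual["E"])  # 1 0
```

The recursive structure mentioned above is visible even in this toy model: intervening on C changes E only by way of changing D, the effect’s direct cause.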
Typically, prevention cases have been handled with appeals to normality. However, I don’t think this works. Here, I argue that there are cases that such appeals mishandle, even though the cases involve the very same intuitions that normality was supposed to account for. So, however I fix the causal-causal theory, the solution won’t appeal to normality.
|2022||(expected) Ph.D., University of Pittsburgh, history and philosophy of science|
|2022||(expected) M.A., University of Pittsburgh, philosophy|
|2022||(expected) CMU/University of Pittsburgh, graduate certificate in neuroscience|
|2018||Ph.D., Wrocław University of Economics, economics|
|2016||M.A., Washington University in St. Louis, philosophy-neuroscience-psychology|
|2011||fellowship at Rutgers University|
|2011||B.A./M.A., University of Wrocław, philosophy|
|2010||B.Sc./M.Sc., University of Wrocław, computer science|
|2008||B.A./M.A., Wrocław University of Economics, management|
And the explanans:
It is a truth universally acknowledged—or at least it was in Poland at the time I was applying to college—that philosophers starve and programmers don’t. So, despite my heartfelt intention to study philosophy, I enrolled in computer science studies. (A piece of background: in Poland, you apply to a particular major, which you then can’t switch, and you typically study it for five years and graduate with a master’s.) A year later, I realized I had enough energy to enroll in another major. Yet, since managing programmers makes the prospect of starvation even less likely, I was persuaded to study management. Fast forward two years. It became clear the desire to do philosophy wasn’t a phase I would grow out of or placate with an elective. Luckily, the philosophy department offered weekend studies—a weekend of classes twice a month. I enrolled. What may seem like lots of work was more like vacations: every other weekend I would forget about the outside world for two days—everyday life would give way to Plato.
Eventually, I graduated with three master’s degrees: management in 2008, computer science in 2010 (it took me some time to finish the thesis), and philosophy in 2011. But I don’t want to imply I did the first two for practical reasons, and only the last reveals my true preferences. That might have been how it started. But in computer science, I took classes in logic, abstract algebra, computational theory, modal logic, and elements of formal semantics; in writing my thesis, I used skills from all these classes. And understanding Turing’s proof that there are problems no machine can solve is one of the most philosophically shaking experiences out there. In my management studies, I quickly realized that macro- and microeconomics and econometrics are where all the fun is. I redesigned my studies (thankfully, the university allowed for that if you found a professor who’d OK your plan) to focus on these subjects. Once I graduated, I enrolled in a Ph.D. program in economics. I wrote my thesis on the economics of education (for knowledge is the highest good, I read somewhere), and I got a National Science Center grant to fund my empirical research. After what happened next, I put the research on the back burner for some time. Still, in 2018, I defended the dissertation; I write more about this research here.
In my fourth year of studying philosophy, I googled Bacon’s experimentum crucis while preparing for an exam. Experimental philosophy popped up in the search results. Empirical research, but in philosophy—what’s not to love? I organized an undergraduate interest group; soon enough, we ran our own experiments—the first xphi studies ever conducted in Poland. One of them married philosophy and economics. We put some people behind (our best approximation of) the veil of ignorance and asked them to discuss and decide on a distribution rule for unknown future payoffs. Our subjects weren’t very Rawlsian, to be honest, but the results were interesting enough to submit an abstract to a conference in Japan. It got accepted. Months later, I attended my first international academic conference. There, Stephen Stich saw our presentation and suggested I should apply to graduate school in the U.S., eventually sponsoring a year-long visit in 2011 so I could audit classes and get recommendation letters.
That’s about that. I got into the philosophy-neuroscience-psychology program at Washington University; after three years, with a master’s, I transferred to Pitt HPS. There and here, I took a few more classes than the curriculum demanded; I expect to graduate with a Ph.D. in history and philosophy of science, a master’s in philosophy from the philosophy department, and a graduate certificate from the Center for the Neural Basis of Cognition.