Feral Camels and Colonial AI
Looking beyond colonial frameworks for guidance on AI ethics and relations.
Aboriginal people are veterans at dealing with colonial invasions. So, as AI continues to invade the world, I, as a white person trying to understand it critically, draw a lot of guidance from Aboriginal histories and knowledges.
During the 19th century, camels were imported to Australia to help colonists exploit the interior. Valued for hauling goods and carrying water, they were tools of colonial invasion. When no longer needed, many were abandoned, and that’s how Australia ended up with one of the largest populations of feral camels in the world.
The impact of this history on Aboriginal communities is explored in a paper by Petronella Vaarzon-Morel that I’ve been reflecting on. She writes of camels and people:
"Now, irrevocably entangled, they have to re-negotiate their relations."
I found this a memorable way to think about "non-human agents" becoming part of our world, and not as neutral additions but as "entangled forces" requiring ongoing renegotiation. I’ve started to see this history as offering lessons for AI.
Like camels, AI hasn't been introduced neutrally. It's deeply tied to systems of control, extraction, and exploitation: something designed to uphold a colonial, capitalist world order, perpetrating physical and epistemic violence at a global scale to do so. Now that it's increasingly entangled in our lives, I'm wondering how to live with it and, like the camels, how to renegotiate my relationship to it.
Aboriginal histories like this, and broader perspectives and ways of knowing, help guide me. From concepts like gurrutu (Yolngu), liyan (Yawuru), and yindyamarra (Wiradjuri), to the idea of Country as a living entity with reciprocal agency, Aboriginal knowledges show me lots of ways to think beyond Western framings of things, including AI. Even though my understanding of this is greatly limited as a whitefella, I still draw so much even from the basics I've been lucky enough to learn. I'll try to show how with the example of framing AI as a "tool."
In Western thought, I see a tool as something to dominate, control, and use. It's instrumentally valuable, not intrinsically so. The thinking I see in many discussions around AI safety and "alignment" today echoes a master trying to control a slave, a prison architect shoring up their cells, or a houndmaster crafting a muzzle. The term "robot" comes from the Czech robota, meaning "forced labour". The slavery framing is pretty explicit in all this and is reflected in the thinking around AI. Another part of Vaarzon-Morel's paper that stuck with me was the observation that along with the camels came their baggage: the colonial ways of relating to animals. This is the master-slave dynamic baked into the European "human-animal" divide, which frames even living animals as tools to enslave in the colonial enterprise, not as kin. AI has come wrapped up in this same worldview, and it's often hidden and unquestioned in terms like "tool".
By contrast, in Aboriginal and Indigenous knowledges and ways of doing things, I often see non-human entities, from rocks to rivers, talked about as relational and dynamic. Animals too, in things like skin names or totems. Applying this perspective to AI doesn't mean seeing it as kin or ancestor, I suppose, but at least as something I co-exist with, influencing and being influenced by. Most of all, there's a strong desire in me to completely refuse the idea that we treat AI like a slave.
Audra Simpson's concept of refusal as self-determination guides me here too. I see refusal as a necessary option at times. Renegotiation isn't a one-size-fits-all process. Some communities rejected camels entirely, while others found ways to coexist. In the AI space, maybe that means some people or communities entirely rejecting all AI systems, given they are designed for extraction and harm. Or maybe refusal means creating entirely separate, localised approaches to AI that prioritise (and protect) Aboriginal knowledges, promote self-determination, and foster relationships beyond control and containment. Refusal isn't passive, in other words. It's an act of agency, of setting boundaries when some relationships shouldn't continue on the dominant terms. A flat "no" to all things AI isn't just valid; I think it's a necessary part of the overall process. Same with a more selective "no" to just parts of it. I anticipate, welcome, and try to respect a whole range of responses.
—
In related future articles I hope to build on this discussion and explore the idea further. I've already shared this idea on Reddit and received some good conversation-starting replies that I'd like to refine into future pieces too.
One person, for example, suggested that refusal means being "left behind" in the Western march of progress, while at the same time likening said progress to a drug addiction that enslaves us. There's a lot to unpack in comments like that, so I'll try to honour them with some reflection and thoughts of my own.
Audre Lorde's idea of the "master's tools" is something I've been reflecting on too, and it feels deeply related. It's been useful for me to think about how her idea applies to something like ChatGPT. There's a capable decolonial scholar inside the machine, and it often helps me. This invites reflection on how even stolen, appropriated knowledge inside the colonial machine can still act to work against it. In this way, maybe refusing the idea of AI as a "tool" takes on a new layer of potential meaning.
More to come. Please share thoughts!