We are currently witnessing an explosion of available
data from the Web, government authorities, scientific
databases, sensors, and more. Such datasets could benefit
from the introduction of rule sets encoding commonly
accepted rules or facts, application- or domain-specific
rules, commonsense knowledge, and so on. This raises
the question of whether, how, and to what extent knowledge
representation methods are capable of handling the
vast amounts of data arising in these applications. In this paper,
we consider non-monotonic reasoning, which has
traditionally focused on rich knowledge structures. In
particular, we focus on defeasible logic and analyze
how parallelization, using the MapReduce framework,
can be used to reason with defeasible rules over huge
data sets. Our experimental results demonstrate that defeasible
reasoning over billions of facts is performant
and has the potential to scale to trillions of facts.
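To illustrate the general idea (not the paper's actual algorithm), the following minimal single-machine sketch simulates a MapReduce pass over a toy fact base with one defeasible rule and a superior defeater; the fact names, rules, and priority relation are invented for illustration:

```python
from collections import defaultdict

# Toy fact base: (predicate, individual) pairs. All names are hypothetical.
facts = [
    ("bird", "tweety"),
    ("bird", "opus"),
    ("penguin", "opus"),
]

def map_phase(fact):
    # Key each fact by the individual so that all evidence about one
    # individual ends up in the same reducer after the shuffle.
    predicate, individual = fact
    yield (individual, predicate)

def reduce_phase(individual, predicates):
    # Illustrative defeasible rules:
    #   r1: bird(X) => flies(X)
    #   r2: penguin(X) => ~flies(X), with r2 superior to r1.
    preds = set(predicates)
    if "penguin" in preds:      # r2 defeats r1
        return (individual, "~flies")
    if "bird" in preds:         # r1 fires undefeated
        return (individual, "flies")
    return None

# Simulate the shuffle/group-by-key step a MapReduce framework performs.
grouped = defaultdict(list)
for fact in facts:
    for key, value in map_phase(fact):
        grouped[key].append(value)

conclusions = dict(
    reduce_phase(k, vs) for k, vs in sorted(grouped.items())
)
print(conclusions)  # {'opus': '~flies', 'tweety': 'flies'}
```

In a real deployment the map and reduce functions would run distributed over a cluster (e.g. Hadoop), with the framework handling partitioning and shuffling; the key choice (grouping by individual) is what lets conflicting defeasible conclusions be resolved locally in each reducer.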