Reconceptualizing the Media Thing!


With the advent of optical fiber networks, not only have people’s perceptions become wired and chained to a digital base, but their knowledge has also escaped the endless loop. “With numbers,” as Friedrich Kittler wrote, “everything goes” (2). From modulation to synchronization, technological discourse has become manipulable in the hands of Silicon Valley. Images, sounds, words, and all sorts of data flows are disappearing into black holes of sound waves and light, succumbing to nameless high commands. In fact, the new technology has not only subverted writing but has also rendered it obsolete.

In a postprint space, the printed word has taken a back seat, and new media technologies have become the novel means of communicating and storing data. Obviously, the new technology involves the production, circulation, and consumption of texts. Previously, all data had to pass through the typist’s needle or the nib of the written signifier. It was simpler, at least from a presentist standpoint, and the task demanded expressionism and self-referentiality, but the latter had to be recast in light of the new media technologies.

You may ask, when did “typing” become “keyboarding”?  

Indeed, the use of the typewriter changed the philosophy of writing from that of a unique expression of a literate individual to that of “a sequence of naked material signifiers.” Other technologies, like sound recording and the filmstrip, have reframed our language of perception. In doing so, these technologies constructed their users and defined their behaviors. We may occasionally like to drift into a reverie of seemingly simpler times, rewriting history through an assemblage of different pieces, that is, isolating and forcing texts to reveal something beyond the materialities and orders of communication. But going back is never an answer, even if it were possible. In other words, we are left only with musings, that is to say, with stories, since media, as Kittler put it, have become the new dharma that “determine[s] our situation.” Nevertheless, media have their own “biases” and “messages,” and these must be taken into account.

In this approach Kittler builds on Marshall McLuhan’s emphasis on “mediality,” which enframes the way we think about the media of the past and of the future. That being said, with the growth of social media networks, anyone can project a mediality to others based on the content they generate. For example, someone who consumes only sports and local news media holds different opinions, and thus a different “mediality,” than someone who consumes a particular mix of entertainment and international news. Kittler, in this regard, emphasizes the role of media technologies in structuring “human affairs.” In Kittler’s usage, a “discourse network” (in this case, social media) designates the “network of technologies and institutions that allow a given culture to select, store, and produce relevant data” (xxiii). This means, once again, that it is the power of particular technologies to shape the content of communication that is at issue, rather than the ability of humans to produce, use, or manipulate these technologies.


What is the Essence of Technology?

“The more questioningly we ponder the essence of technology, the more mysterious the essence of art becomes.” That is how Martin Heidegger closed his ontological analysis in “The Question Concerning Technology.” Perhaps such an enigmatic position comes from an emphasis on the essence of technology as a vehicle through which the question of being can be unfolded. In fact, its ontological reality demands a thorough analysis of what technology is. How do we examine technology? How do we explain it? Is it a phenomenon? Is it a tool or an end per se? Is it a man-made means to serve an end established by man?

Clearly, Heidegger disassembles the word technology and projects it phenomenologically as “letting that which shows itself be seen from itself in the very way in which it shows itself from itself.” In other words, Heidegger’s theory of technology is based on an ontological understanding of being. He examines the structural features of technology as an entity, of what makes it a “being,” its practices, and how it correlates with human activity. It is an instrument and an idea at the same time. The whole complex of these contrivances is technology.

Yet the anthropological and instrumental definition of technology still does not show its essence. Heidegger went even further, claiming that technology has a fourfold causality: the material, the form, the end, and the effecting of the finished product, all belonging at once to one another, all responsible for bringing something into appearance. Again one asks: does such causality happen beyond human doing? The answer is no, but neither does it happen exclusively in man. There is a correspondence between the two. In this frame, technology, as the Greek root technê suggests, is “the name not only for the activities and skills of the craftsman but also for the arts of the mind and the fine arts” (Heidegger, 318). So, if we speak of the “essence” of technology, we speak of a process, of the different ways through which it is computed, administered, and developed.

If the “essence” of technology is a goal-oriented activity, or a powerful tool to increase production “coupled with scientific and technical progress,” as Habermas argued, its reality remains intricate, for it relies on computational ideas. Probing these ideas, as Brian Cantwell Smith argued, “is a little like skating on thin ice: all goes well so long as you keep moving. Stop and probe, and cracks open up—the gloss of solidity is lost” (“Age of Significance,” 3). But why does it take millions of lines of code to implement even very simple applications on any machine?

It is not evident that these functions could succumb to scientific analysis at all, or whether they could be reduced to a human activity, making them more amenable to study from the social sciences and the humanities, maybe even the arts.

But still, could these cracks be healed with education and time? Although it is not clear that these two could define the “essence” of technology, Habermas argues that behind the modern developmental process lies a structure of rationality that is realized in specific forms privileged by the dominant society. This does not challenge Weber’s account of technical rationalization, since Habermas appears to identify it with its specifically capitalist forms. Cantwell Smith, however, argues that “whatever the reason, the central notions of one of society’s most widely heralded developments (computing) have remained remarkably conceptually untheorised” (Smith, 4).

Logic & the Design of Digital Computers


From time immemorial, logic has underpinned several facets of knowledge: the humanities, science, technology, engineering, and math. But wait! What is “logic”? Is there a way to measure the extent of our logical thinking? Are there principles of valid logic? If by the latter one means coherence, deductive inference, and syllogism, then several philosophers and mathematicians, from Aristotle and al-Khwārizmī to Gottfried Leibniz, John von Neumann, and Alan Turing, have used such concepts to decipher and manipulate the symbols of a mechanical language of thought. But “logic,” as a principle of arithmetic, could also mean the construction of large edifices from small bricks (divided categories) using a binary language consisting of just two digits, 0 and 1. Yet it seems that, with the breakneck rapidity of computing and mathematical language, the role of logicians remains in question.
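That brick-by-brick construction can be made concrete. As a minimal sketch of my own (not drawn from any of the texts discussed here), the following Python snippet assembles binary addition entirely out of Boolean gates, the smallest bricks of the binary language:

```python
def xor_gate(a: int, b: int) -> int:
    return a ^ b

def and_gate(a: int, b: int) -> int:
    return a & b

def or_gate(a: int, b: int) -> int:
    return a | b

def half_adder(a: int, b: int):
    """Add two bits: returns (sum_bit, carry)."""
    return xor_gate(a, b), and_gate(a, b)

def full_adder(a: int, b: int, carry_in: int):
    """Chain two half-adders to handle an incoming carry."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, or_gate(c1, c2)

def add(x: int, y: int, width: int = 8) -> int:
    """Ripple-carry addition of two integers, gate by gate."""
    carry, result = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add(19, 23))  # 42
```

Nothing in this edifice knows what a number is; each brick only answers a yes/no question, and arithmetic emerges from how the bricks are wired together.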

Regardless, the use of logical concepts has been central not only to answering big questions and regulating daily matters, but also to designing and creating new technology: the map, the printer, digital computers, and so on. It would suffice to refresh one’s memory of high school math drills for a number of nostalgic moments to be recalled. I was taught to equate people’s intelligence with solving math problems. The bigger the math nerd you are, the better the logician you become! That notion went on to frame how I assessed people’s logical thinking and computational literacy. At that stage, one would have assumed that there is a strong connection not only between logic and math, but also between logic and modern digital machines.

The intellectual lineage of digital computers is not as recent as one might think. In fact, as Martin Davis argues, computers are in many ways the culmination of the glorious and powerful mathematical tradition we now call logic. Davis traces this lineage through biographical sketches of several contributors, à la Shakespearean tragedy, in which each suffered an ill-fated dream. While a logician such as Leibniz grasped the broader significance of systematizing logic and mechanizing calculation, both for fixing difficult problems and for completing Aristotle’s project of codifying syllogisms, his optimistic dream of creating an encyclopedic compilation and reducing human reasoning to a purely mechanical and symbolic task remained myopic.

That being said, the conceptual marvel of logic, and how it laid the ground for the design of modern technology, is impeccable. Take the computer, for example, and how it can perform so many different tasks at once. It is remarkable that the ingenious gadget on which I am typing this blog is equally adept at generating solutions to partial differential equations, depending on the input we enter and the kind of output we expect to obtain. So the ultimate test of a theory of logic that aims at completeness is whether it encompasses all mathematical reasoning. But in the end one still wrestles with the “engineers” vs. “logicians” dialectic, or with “who shaves the barber?” Russell’s barber paradox exposed the cracks in naive set theory, while Cantor’s diagonal method demonstrated the existence of infinities higher than the infinity of the integers.
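Cantor’s diagonal move is simple enough to sketch in a few lines of Python (my own illustration, not an example from Davis). For any finite table of binary strings, flipping the diagonal produces a string that cannot appear anywhere in the table; applied to infinite lists, the same move shows that the real numbers cannot be enumerated:

```python
def diagonal_escape(table):
    """Given n binary strings of length n, build a string that is
    guaranteed to differ from every row: flip each diagonal bit."""
    return ''.join('1' if row[i] == '0' else '0'
                   for i, row in enumerate(table))

table = ['0110', '1010', '0011', '1111']
d = diagonal_escape(table)
print(d)           # '1100'
print(d in table)  # False: d differs from row i at position i
```

However long the list, the escaped string always differs from the i-th row in its i-th digit, which is the whole force of the argument.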

What is Digital Humanities?


As a historian and a tech nerd, I have always been intrigued by the concept of “Digital Humanities” (DH). What could it mean? Is it about making a text visible on the web? Or is it about data processing? Is it simply interdisciplinary? To what extent could DH impact the creation and dissemination of knowledge? Also, what could it bring to the historiographical problem, given the partisanship and selectivity of digital archives? These questions, for historians and humanists in general, remain a continuing process of exploration and refinement rather than a search for ready-made answers. Indeed, there has been a challenging scholarly debate as to what DH is and what it is not. John Unsworth, one of the pivotal figures in the terrain, defines DH, or “Humanities Computing,” as the use of “computational tool[s] for modeling humanities data” (Unsworth, 1). While this approach is a succinct encapsulation of what DH is, Davis and his co-authors went even further, describing “Humanities Computing” as “a practice of representation, a form of modeling or […] mimicry. It is […] a way of reasoning and a set of ontological commitments, and its representational practice is shaped by the need for efficient computation on the one hand, and for human communication on the other” (Davis et al., 1). This position might lend a certain logic to making a text visible on the web, but others, like Amanda French, still find the term difficult to define, stating: “I am sick of defining it…”.
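What “modeling humanities data” can look like at its most elementary is easy to sketch. The snippet below is my own hypothetical illustration (not an example from Unsworth): it models a passage of text as a word-frequency distribution, one of the oldest and simplest moves in humanities computing.

```python
import re
from collections import Counter

def word_frequencies(text: str, top: int = 3):
    """A crude computational model of a text: its most frequent words."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top)

passage = ("The medium is the message. The content of any medium "
           "is always another medium.")
print(word_frequencies(passage))  # [('the', 3), ('medium', 3), ('is', 2)]
```

Even this toy model already embodies “ontological commitments”: it decides what counts as a word, discards order and punctuation, and trades the texture of the text for something a machine can compare and count.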

As it may seem, DH is not a unified field; it is rhetorically contested, and several scholars have lobbied for one definition over another. But what is crucial about it is the way in which it has become an integral part of academic discourse. It has also unearthed the question of the democratization of culture and scholarship, and to a certain degree it has collapsed the boundaries between the humanities and other disciplines. In other words, it is about looking backward in order to move forward and act in new ways. Lunenfeld et al., in this frame, argue that DH “promotes collaboration and creation across domains and expertise […] leverage[s] the increasingly distributed nature of expertise and knowledge and transform[s] this reality into occasions for scholarly innovation, disciplinary cross-fertilization, and the democratization of knowledge” (Lunenfeld, 4). Knowledge production and reproduction are no longer limited to a single authorship and intellectual policing, but belong instead to a truly ubiquitous and multilingual authorship that creates, manages, and shares information on a common platform. Several authors can share the same interface to comment on each other’s research, receive updates, identify needs, exchange in-process queries, and produce collaboratively crafted work.

That being said, DH, through peer-to-peer review, reasserts and reinterprets the value of humanities knowledge. As Lunenfeld argues, it is a new “umbrella under which to group both people and projects seeking to reshape and reinvigorate contemporary arts and humanities practices, and expand their boundaries” (Lunenfeld et al., 13). Put differently, scholarly value is no longer associated with the number of publications but is instead based on the reception and feedback of a community of readers. Following the same line of thought, Willard McCarty states that DH, as a new invention, “has offered” scholars “new liberties of action, that old constraints have been removed, that their political will, or their sheer greed, are no longer frustrated, and that they can act in new ways” (McCarty, 1224). These liberties will continue to shape the future of DH as new opportunities open up for scholars to share their work and render it ubiquitous. But still, could one be a digital humanist without knowing how to code? Is DH a question of coding? These questions remain speculative and lie beyond the scope of present knowledge.
