
Rabu, 05 Oktober 2016

History of ISO 20000


I was interested in the origins of ITIL and ISO 20000, and began looking them up on the internet and in the literature. There are bits and pieces of information here and there, and rumors and claims of questionable quality on forums and blogs. I have managed to collect this information and organize it into a table for the convenience of anyone who might share the interest.

Since some of the sources are not 100% verified, please feel free to comment if you have an update or complementary info.

Some myths (such as the Falklands War origins of ITIL) are funny and believable, but they are denied by the authors.

The role of IBM and its Information Systems Management Architecture (ISMA) is pretty vague; even the stakeholders from that time do not agree on the degree to which it influenced ITIL. I am ready to believe that ISMA played a significant role in the forming of ITIL V1, but that vehicle quickly took the road and went many miles from its origins.


The late Margaret Thatcher was first elected Prime Minister in 1979. The UK Government's IT budget at the time was extremely large and difficult to control, and it is quite reasonable to think that ITIL's development was related to the cost-cutting, efficiency-driven policy of that era.

As for ISO 20000, information on its history is even scarcer. I thought it would be fun to see the major milestones and deliverables of both side by side, year by year:

1986 – ITIL: CCTA (the UK Government's Central Computer and Telecommunications Agency) authorized a program to develop a common set of operational guidance with the objective of increasing efficiencies in Government IT.
1988 – ITIL: GITMM (Government Infrastructure Management Method) was formalized and issued as guidelines for Government IT operations in the UK, focused on Service Level Management. Also in 1988, the development team was expanded and work continued on Cost, Capacity, and Availability.
1989 – ITIL: The GITMM title proved inadequate: it was not a method, and the "G" made it unmarketable outside of government. It finally received a new name: the IT Infrastructure Library (ITIL). The first ITIL books were published: Service Level Management, Help Desk (incorporating the basic concepts of Incident Management), Contingency Planning, and Change Management.
1990 – ITIL: Problem Management, Configuration Management, and Cost Management for IT Services published.
1991 – ITIL: Software Control & Distribution published. CCTA initiated the IT Infrastructure Management Forum (ITIMF) as a formal user group.
1992 – ITIL: Availability Management published by CCTA.
1993 – ITIL: The Examination Institute for Information Science (EXIN) was established in the Netherlands to deliver and administer the ITIL examinations.
1995 – ISO/IEC 20000: The British Standards Institution (BSI) published the first version of DISC PD 0005:1995, Code of Practice for IT Service Management. It described four basic ITSM processes.
1996 – ITIL: (July) The first ITIL Service Manager class was delivered in the United States by the US company ITSMI.
1997 – ITIL: A customer-focused update to the Service Level Management book was published. ITIMF legally became what we know today as the IT Service Management Forum (itSMF UK).
1998 – ISO/IEC 20000: BSI published a revised version, DISC PD 0005:1998, which already described all five process areas and 13 processes as we know them today.
2000 – ITIL: Service Support V2 published. ISO/IEC 20000: BS 15000:2000, Specification for IT Service Management, was published and used together with the code of practice DISC PD 0005. A third supplementary document, DISC PD 0015:2000 IT Service Management Self-Assessment Workbook, was also published: a questionnaire used to assess the degree of compliance with BS 15000.
2001 – ITIL: Service Delivery V2 published. CCTA became part of the Office of Government Commerce (OGC). Microsoft released the Microsoft Operations Framework (MOF), based on ITIL.
2002 – ITIL: Application Management, Planning to Implement IT Service Management, and ICT Infrastructure Management published. ISO/IEC 20000: Some revisions and rewriting followed, resulting in standard documents very similar to today's norm: BS 15000-1:2002 IT Service Management – Specification for Service Management, and PD 0015:2002 Self-Assessment Workbook.
2003 – ITIL: Software Asset Management published. The British Computer Society's ISEB started ITIL Practitioner training and examinations. ISO/IEC 20000: BS 15000-2:2003 IT Service Management – Code of Practice for Service Management was published, as was PD 0005:2003 Guide to Management of IT Service Management, which explained the purpose of BS 15000 and gave framework guidance on how to use and implement the standard's processes.
2004 – ITIL: Business Perspective: The IS View on Delivering Services to the Business published. ISO/IEC 20000: BS 15000 was adopted by many service companies in the UK and accepted in countries worldwide.
2005 – ISO/IEC 20000: BS 15000 was placed on the "fast track" by ISO. By the end of the year, with some moderate changes, it was published as the ISO/IEC 20000 standard: ISO/IEC 20000-1:2005 Specification (very formal; it defines processes and provides assessment/audit criteria) and ISO/IEC 20000-2:2005 Code of Practice (how-tos and best practices for implementing Part 1).
2006 – ITIL: (June) ITIL Glossary V2 published. APM Group Limited was announced as the preferred bidder for the ITIL accreditation and certification program, over itSMF International, which had been expected to win.
2007 – ITIL: (May) The five ITIL V3 core books were published.
2009 – ISO/IEC 20000: ISO/IEC TR 20000-3:2009, Guidance on scope definition and applicability, published.
2010 – ITIL: PeopleCert Group was accredited as the new ITIL Examination Institute. ISO/IEC 20000: ISO/IEC TR 20000-4:2010 Process reference model (describing the service management system processes implied by ISO/IEC 20000-1 at an abstract level) and ISO/IEC TR 20000-5:2010 Exemplar implementation plan for ISO/IEC 20000-1 (guidance for implementing ISO/IEC 20000 by example and advice) were published.
2011 – ITIL: (July) The ITIL 2011 update was published. ISO/IEC 20000: (April) ISO/IEC 20000-1:2011, the new version of the specification, was released.
2012 – ISO/IEC 20000: (February) ISO/IEC 20000-2:2012, new guidance on the application of service management systems, was published.
Now – Work in progress: ISO/IEC 20000-7, Application of ISO/IEC 20000-1 to the cloud; ISO/IEC 20000-10, Concepts and terminology for ISO/IEC 20000-1; and ISO/IEC 20000-11, Guidance on the relationship between ISO/IEC 20000-1 and related frameworks.

To better understand ISO 20000, you can also find Service Management System-related documents as a free preview in the ISO 20000 Knowledge Base.

ITIL terminology

ITIL may bring a structured, easy-to-understand, and easy-to-implement framework, but it also uses clearly defined terminology that enables ITIL experts around the world to understand each other. As in any other field of interest, ITIL terminology uses terms whose meanings may differ from their usual meanings in the English language, while others are self-explanatory.

Within this article, some common, and important, ITIL terms will be explained.

Service

What better way to start with ITIL terminology than with an explanation of service? A service is anything you do in order to deliver value to the customer. For example, e-mail as a service is what customers use to exchange messages, files, documents, etc. However, the customer will not be aware of, or interested in, the technology behind it or how it's set up; as long as the service can be used reliably, it's considered valuable. You can read more about IT Service Management here.

Service Catalog and Service Portfolio

There is often confusion between the Service Catalog and the Service Portfolio, as both contain lists of services and corresponding SLAs. However, while the Service Catalog contains only the active services that are available to customers, the Service Portfolio contains all services: active; active but no longer generally available (e.g., legacy systems still used by a few end users); services in development or planning; and services that have been shut down.
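To make the distinction concrete, here is a minimal sketch (hypothetical service names and status labels, not an official ITIL data model) of how a catalog can be derived from a portfolio:

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    status: str  # e.g. "planned", "in_development", "active", "legacy", "retired"

# The Service Portfolio tracks every service across its whole lifecycle.
portfolio = [
    Service("E-mail", "active"),
    Service("Fax gateway", "legacy"),           # still used by a few end users
    Service("Chat platform", "in_development"),
    Service("Mainframe terminal", "retired"),
]

# The Service Catalog is only the slice customers can actually order today.
catalog = [s for s in portfolio if s.status == "active"]
print([s.name for s in catalog])  # ['E-mail']
```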

Service Level Agreement (SLA) and Operational Level Agreement (OLA)

A Service Level Agreement is a document signed between the customer and the service provider in which the service and the guaranteed levels of service delivery are described. For example, an SLA would include the time frame in which the service will be available, the metrics that will be measured in order to provide the expected level of service, etc. An Operational Level Agreement, on the other hand, is an agreement between the organizational teams involved in service delivery. There might be several teams involved (server, networking, application...), and they all must work together to meet the requirements stated in the SLA.

Utility and Warranty

Utility and Warranty are like two sides of the same coin, and without them a service has no meaning. Utility is commonly referred to as "fit for purpose," and Warranty as "fit for use." You can think of Utility as a service's ability to do what it was built to do (e.g., an e-mail service must be able to send and receive e-mails). Warranty, on the other hand, is a service's ability to be used by the customer in the expected and agreed way. For example, if you have an e-mail service that sends and receives e-mails (Utility), but it can't handle more than two concurrent e-mails, or every 100th e-mail gets lost, then that service's Warranty is failing.

Incident, Problem and Known Error

An Incident is an event that caused degradation or disruption of a service (e.g. e-mail is not working). Read more about ITIL Incident Management.

A Problem, on the other hand, is the root cause of an incident, or of many incidents that repeat with the same root cause (e.g., e-mail is not working, file sharing is not working, and users can't print, all due to a network outage, all at the same time, generating lots of related incidents). A good example of the difference between ITIL's definitions of Incident and Problem is the famous line from Apollo 13: "Houston, we have a problem." According to ITIL, however, the correct terminology would be "Houston, we have an incident."

A Known Error is an incident or a problem whose root cause has been identified and for which some sort of Workaround has been implemented. It will remain a known error until a change is applied that corrects the root cause permanently; that permanent correction is then referred to as a Solution.
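As a rough illustration (my own toy model, not an official ITIL schema), the relationship between incidents, a problem, and a known error might be sketched like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Problem:
    description: str
    root_cause: Optional[str] = None
    workaround: Optional[str] = None

    @property
    def is_known_error(self) -> bool:
        # A problem becomes a Known Error once its root cause is identified
        # and a workaround is in place.
        return self.root_cause is not None and self.workaround is not None

@dataclass
class Incident:
    summary: str
    problem: Optional[Problem] = None  # linked once a common cause is found

# One network outage (the problem) generates many related incidents.
outage = Problem("Network outage in main office")
incidents = [
    Incident("E-mail is not working", outage),
    Incident("File sharing is not working", outage),
    Incident("Users can't print", outage),
]

outage.root_cause = "Failed core switch"
outage.workaround = "Reroute traffic through the backup switch"
assert outage.is_known_error  # stays one until a permanent change is applied
```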

Service Desk

The Service Desk is the single point of contact between end users and the service provider. It's used to receive and resolve incidents, service requests, and requests for information, and to coordinate communication between the service provider and end users (customers). People commonly treat Call Center, Help Desk, and Service Desk as synonyms, which is not the case: a Call Center doesn't attempt to solve incidents, and a Help Desk doesn't handle service requests. More information about the Service Desk is available in this blog post: Service Desk: Single point of contact.

Service Request

A Service Request is another ITIL term; it is used for requests for new services or alterations of existing ones. For example, an end user may ask for internet access (a new service) or an increase in mailbox size (an alteration). In general, a Request for Information is considered to be part of a Service Request, and so is a Request for Change.

Request for Change

Change Management is an important part of ITIL and IT Service Management in general; therefore, any change needs to start with a formal and detailed request to make the change: a Request for Change (RFC). The RFC has to include a description of the change, the business need behind it, the components/services that will be affected, cost estimates, a risk assessment, and the approval status. Get more information in ITIL V3 Change Management – at the heart of Service Management.

An example would be the installation of a new application server required by the finance department. According to ITIL best practice, you can't just go and purchase a new server, install some software, and attach it to the corporate network. There might not be enough room in the data center racks, or not enough power or cooling, or there might be an unused server already available. These and numerous other considerations must be weighed before implementing, but most important is that there is a clear trail, in the form of formal requests, which then starts the change process.
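A minimal sketch of an RFC record carrying the fields listed above (the field names are illustrative, not a standard template):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RequestForChange:
    description: str
    business_need: str
    affected_items: List[str]           # components / services touched
    cost_estimate: float
    risk_assessment: str
    approval_status: str = "submitted"  # e.g. submitted -> approved -> rejected

rfc = RequestForChange(
    description="Install a new application server",
    business_need="Required by the finance department",
    affected_items=["data center rack space", "power and cooling", "corporate network"],
    cost_estimate=12000.0,
    risk_assessment="Medium: rack capacity and cooling must be verified first",
)
```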

Selasa, 04 Oktober 2016

12 steps of ISO 20000 implementation

If you dig into the content of ISO 20000-1 (the requirements for the Service Management System, i.e., the SMS), questions will start to pop up. Maybe the most interesting one is "How do I implement all this?" Well, although it sounds complex, if you approach your implementation systematically, it shouldn't be.

In this article, I'll explain the workflow of an ISO 20000 implementation. By making the decision to implement the standard's requirements, you have already taken the first significant step. If you would like to learn more about the main reasons why companies don't implement ISO 20000, read the article What are the most common ISO 20000 implementation myths?

The implementation steps

What I'd suggest you do is: be systematic. This means that you should avoid ad hoc solutions and decisions. If you change implementation direction too often, you will create chaos. Here are 12 implementation steps, set in logical order:

Obtain management support – That's your first battle: convincing your management to support the implementation. Why? They need to allocate funds and, besides money, you need a strong sponsor. That should be your management.
Establish the project – This is not a mandatory step, but it will significantly increase the efficiency of the implementation, because you will have a clear goal, people (and other resources), a time plan, inputs, outputs, etc. Project management is your tool (as the person responsible for the implementation) to keep things under control and achieve the desired results: implementation of the SMS and certification against ISO 20000-1.
Perform assessment and gap analysis – This is one more step that is not mandatory, but it's highly advisable to perform a gap analysis and check your existing management system against the ISO 20000 requirements. I'm pretty sure you are managing incidents or changes even without ISO 20000 in place. So, check what's missing to comply with the standard and you will not have to do those things over again (and, not to forget, you will shorten the implementation time and save resources, monetary as well as non-monetary).
Define scope, management intention, and responsibilities – This is the phase when you need to set the foundations of your SMS and define the direction of all further activities. In this step you will define the scope and policy of the SMS. Read the article How to define the scope of the SMS in ISO 20000 to learn more about the scope of the SMS.
Implement support procedures – These are "non-productive" procedures, i.e., ones that are not involved in the daily operations of your SMS but have an indirect effect on them: e.g., the procedure for document and record control, the internal audit procedure and checklist, the communication procedure, etc.
Generate process/function documentation – Now the "party" starts. The previous steps were used to set up the management system; now you have to add all the processes required by the standard. You will use previous experience (i.e., the knowledge you have), external help, tools, etc. for the implementation. If you decided to use the project approach, this is where it will be most beneficial.
Implement processes and/or functions – This is when theory goes into practice. In addition to (hopefully) well-prepared process documentation, your managerial skills will come to the surface.
Perform training and awareness programs – It's important that all people involved in the SMS are aware of their tasks, have the same understanding of the SMS (its purpose, i.e., its goals and processes), and "speak the same language" (e.g., when a user reports a malfunction in one of the services, it's an incident).
Operate the SMS – As I already mentioned, your managerial skills (or those of whoever is the SMS manager) are important not only during the implementation, but also afterwards. Remember, once you implement the SMS, it will support IT services used by your customers. And that can be tricky. So, you have to be good at managing those services, i.e., running the SMS.
Create the Continual Service Improvement concept – Besides the fact that it's one of the requirements, it's also one of the facts of everyone's (including the SMS's) life: change is continual. And that's a good thing. It will improve the performance of the SMS and make customers happy.
Implement the Continual Service Improvement concept – Once you define your continual service improvement concept, implement it. And keep it running, even if you have to improve it from time to time.
Conduct the internal audit – This step will tell you how well you have done so far. Find someone objective (who was not part of the implementation) to perform the audit.
Management review – This is one of the mandatory steps and the conclusion of your implementation project. In my experience, your management will want to know what's going on. Prepare well (the standard's many requirements will help you prepare for the meeting).
And, that's it. You are done. What is left are the audits:
Stage 1 certification audit (documentation review) – Before the certification audit, your certification body will visit you and check the SMS. This is your chance for an open talk in which you can only gain: they will tell you what they think about your work, i.e., what you need to improve.

Stage 2 certification audit (main audit) – This is your "big moment." The auditor(s) will visit you and tell you that everything is perfect, won't they? I hope so. In any case, this is your final step, and I hope it will verify your successfully implemented SMS.

Is that all?
More or less, yes. There could be some smaller deviations, but in essence the steps above will take you to the end of the implementation. You may have noticed that the first set of steps was about setting up the SMS; then came the definition and implementation of processes and functions. Although the management review confirms the desired results, continual improvement is your "destiny." It never stops; on the contrary, it should ensure you are getting better day by day.

Use this free Diagram of the ISO 20000-1:2011 implementation process to manage your ISO 20000 implementation.

Mathematics


The internet had lots of great and terrible uses of math and mathematical visualizations in September 2016! This is our opportunity to applaud the winners and be confused by the blunders. Here are a few of my favorites:

1. The Gold Star goes to...
Henry Segerman, for his amazing 3D Möbius transformation artwork. In particular, I want to call out his stereographic projections. A stereographic projection is a mapping (that is, a function) that projects a sphere onto a plane. In the case of Segerman's art, you can shine a light from above to make the projection appear.


Stereographic projections have a strong and beautiful connection to Möbius transformations. Moebius (or Möbius, like the Möbius band!) transformations have a key part to play in understanding complex analysis. In particular, they are a particular type of mapping (function) that maps the complex plane (together with a point at infinity) back onto itself. Stereographic projections have the power to make something really complicated appear relatively simple. To really understand why stereographic projections are so meaningful, I recommend the video "Moebius Transformations Revealed" by University of Minnesota professor John Rogness. I promise that there are almost no equations and you'll probably learn something.
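For reference, here are the standard textbook definitions in play (nothing specific to Segerman's pieces). A Möbius transformation acts on the extended complex plane, and the stereographic projection sends the unit sphere (minus the north pole) to that plane:

```latex
% Möbius transformation, with complex coefficients a, b, c, d:
f(z) = \frac{az + b}{cz + d}, \qquad ad - bc \neq 0.

% Stereographic projection from the north pole N = (0,0,1) of the unit
% sphere onto the equatorial plane, identified with the complex plane:
\pi(x, y, z) = \frac{x + iy}{1 - z}.
```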

Do you need one of these in your life?

2. Terrible Use of "Mathiness"
My least favorite mathematical graphic from September has to be Bloomberg Businessweek's visualization on income. Income is a really important topic, and the idea that "everybody thinks they're middle class" is an important piece of the puzzle in understanding why income inequality is growing over time. However, I strongly disagree with the way they presented the material.

First, let's take a moment to wonder at the title: "Everybody thinks they're middle class." The visualization gives only 5 individuals' opinions, and yet we presume to call this a representation of "everyone"? In high school I learned that three points make a line... but hopefully we all realize that a 5-person sample out of more than 300 million Americans does not a statistical sample make! And since, with a sample size of 5, it's not providing scientific insights, I'm left to believe that its goal has to be emotional insight for a nontechnical audience... which it also does poorly.

How well does it communicate with a nontechnical audience member? The article is basically one big visualization with a little bit of text:

This is a visualization showing the annual income of Americans. Annual income is on the x-axis, plotted against the percentage of the population who have that income level on the y-axis. How do you read it? Well, one can learn that approximately 10% of the population makes $30k and about 4% of the population makes $100k. Basically, this visualization is like a histogram with a million little rectangles... drawn as a continuous line. However, it obfuscates what percentage of the total population is above or below each data point.

Our society has been very focused on the top 1%. So, when I look at this graphic, I want to understand where in this visualization the top 1% is. I'm also curious about other things, like: what is the mean income of this study? It's really hard to tell when you look at the Bloomberg graphic. What I really want is something more like a Pareto chart or a cumulative distribution curve:
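Something like the chart below. Since I can't reproduce Bloomberg's data, here is a minimal matplotlib sketch using synthetic, made-up incomes (the lognormal parameters are purely illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic annual incomes; parameters chosen only to look income-like.
rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10.8, sigma=0.7, size=100_000)

fig, ax = plt.subplots()
counts, bins, _ = ax.hist(incomes, bins=100, range=(0, 300_000))
ax.set_xlabel("Annual income ($)")
ax.set_ylabel("Number of people")

# Cumulative share of the sample, read off the right-hand axis.
ax2 = ax.twinx()
centers = (bins[:-1] + bins[1:]) / 2
ax2.plot(centers, 100 * np.cumsum(counts) / counts.sum(), color="tab:red")
ax2.set_ylabel("Cumulative % of population")
plt.show()
```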

Here we get the histogram AND information about the cumulative values, so I can visually see when I've reached 50%, 80%, or 98% of my population sample by reading the right y-axis. Ultimately, I think the Bloomberg visualization falls short of providing insight to anyone, in both content and visualization. Better luck next time, Bloomberg!

3. Math Graphic of the Month
I'm guessing that if you are reading this website, you probably believe in math and science. Thus, I think you will also appreciate that science (and math!) does not have a political agenda. Simply put, mathematics is a tool to learn about and communicate the facts of the world. In our social climate, I think it's important to remember that there is a division between science and state. They have different goals and different aims. And math, if math had opinions and emotions (which it doesn't!)... anyway, if it did, math couldn't care less about which way you vote. In short, I think this is a beautiful reminder:

You can buy yours here.

Did you have a favorite experience with math on the internet in September? Share it in the comments below! Until next time, have a mathy October!

Sabtu, 01 Oktober 2016

Religion of technology



I walked into my bank a few days ago and found that the lobby had a different look. The space had been rearranged to highlight a new addition: an automated teller. While I was being helped, I overheard an exchange between a customer in line behind me and a bank worker whose new role appeared to be determining whether customers could be served by the automated teller and, if so, directing them to it.

She was upbeat about the automated teller and how it would speed things up for customers. The young man talking with her posed a question that occurred to me as I listened but that I'm not sure I would have had the temerity to raise: "Aren't you afraid that pretty soon they're not going to need you guys anymore?"

The bank employee was entirely unperturbed, or at least she pretended to be. "No, I'm not worried about that," she said. "I know they're going to keep us around."

I hope they do, but I don't share her optimism. I was reminded of a passage from Neil Postman's Technopoly: The Surrender of Culture to Technology. Writing in the early '90s about the impact of television on education, Postman commented on teachers who enthusiastically embraced the transformations wrought by television. Believing the modern school system, and thus the teacher's career, to be the product of print culture, Postman wrote,

[...] surely, there is something perverse about schoolteachers' being enthusiastic about what is happening. Such enthusiasm always calls to my mind an image of some turn-of-the-century blacksmith who not only sings the praises of the automobile but also believes that his business will be enhanced by it. We know now that his business was not enhanced by it; it was rendered obsolete by it, as perhaps the clearheaded blacksmiths knew. What could they have done? Weep, if nothing else.

We might find it in us to weep, too, or at least acknowledge the losses, even when the gains are real and important, which they are not always. Perhaps we might also refuse a degree of personal convenience from time to time, or every time if we find it in us to do so, in order to embody principles that might at least, if nothing else, demonstrate a degree of solidarity with those who will not be the winners in the emerging digital economy.

Postman believed that computer technology created a similar situation to that of the blacksmiths, "for here too we have winners and losers."

“There can be no disputing that the computer has increased the power of large-scale organizations like the armed forces, or airline companies or banks or tax-collecting agencies. And it is equally clear that the computer is now indispensable to high-level researchers in physics and other natural sciences. But to what extent has computer technology been an advantage to the masses of people? To steelworkers, vegetable-store owners, teachers, garage mechanics, musicians, bricklayers, dentists, and most of the rest into whose lives the computer now intrudes? Their private matters have been made more accessible to powerful institutions. They are more easily tracked and controlled; are subjected to more examinations; are increasingly mystified by the decisions made about them; are often reduced to mere numerical objects. They are inundated by junk mail. They are easy targets for advertising agencies .... In a word, almost nothing that they need happens to the losers. Which is why they are the losers.

It is to be expected that the winners will encourage the losers to be enthusiastic about computer technology. That is the way of winners ... They also tell them that their lives will be conducted more efficiently. But discreetly they neglect to say from whose point of view the efficiency is warranted or what might be its costs."

The religion of technology is a secular faith, and as such it should, at least, have the decency of striking a tragic note.

Defense of memory


I suppose it is the case that we derive some pleasure from imagining ourselves to be part of a beleaguered but noble minority. This may explain why a techno-enthusiast finds it necessary to attack dystopian science fiction on the grounds that it is making us all fear technology while I find that same notion ludicrous.

Likewise, Salma Noreen closes her discussion of the internet's effect on memory with the following counsel: "Rather than worrying about what we have lost, perhaps we need to focus on what we have gained." I find that a curious note on which to close because I tend to think that we are not sufficiently concerned about what we have lost or what we may be losing as we steam full speed ahead into our technological futures. But perhaps I also am not immune to the consolations of belonging to an imagined beleaguered community of my own.

So which is it? Are we a society of techno-skeptics with brave, intrepid techno-enthusiasts on the fringes stiffening our resolve to embrace the happy technological future that can be ours for the taking? Or are we a society of techno-enthusiasts for whom the warnings of the few techno-skeptics are nothing more than a distant echo from an ever-receding past?

I suspect that the latter is closer to the truth, but you can tell me how things look from where you're standing.

My main concern is to look more closely at Noreen's discussion of memory, which is a topic of abiding interest to me. "What anthropologists distinguish as ‘cultures,’" Ivan Illich wrote, "the historian of mental spaces might distinguish as different ‘memories.'" And I rather think he was right. Along similar lines, George Steiner lamented, "The catastrophic decline of memorization in our own modern education and adult resources is one of the crucial, though as yet little understood, symptoms of an afterculture.” We'll come back to more of what Steiner had to say a bit further on, but first let's consider Noreen's article.

She mentions two studies as a foil to her eventual conclusion: the first suggests that "the internet is leading to 'digital amnesia', where individuals are no longer able to retain information as a result of storing information on a digital device," and the other that "relying on digital devices to remember information is impairing our own memory systems."

"But," Noreen counsels her readers, "before we mourn this apparent loss of memory, more recent studies suggest that we may be adapting." And in what, exactly, does this adaptation consist? Noreen summarizes it this way: "Technology has changed the way we organise information so that we only remember details which are no longer available, and prioritise the location of information over the content itself."

This conclusion seems to me banal, which is not to say that it is incorrect. It amounts to saying that we will not remember what we do not believe we need to remember and that, when we have outsourced our memory, we will take some care to learn how we might access it in the future.

Of course, when the Google myth dominates a society, will we believe that there is anything at all that we ought to commit to memory? The Google myth in this case is the belief that every conceivable bit of knowledge we could ever possibly desire is just a Google search away.

The sort of analysis Noreen offers, which is not uncommon, is based on an assumption we should examine more closely and also leaves a critical consideration unaddressed.

The assumption is that there are no distinctions within the category of memory. All memories are assumed to be discrete facts of the sort one would need to know in order to do well on Jeopardy. But this assumption ignores the diversity of what we call memories and the diversity of functions to which memory is put. Here is how I framed the matter some years back:

All of this leads me to ask: What assumptions are at play that make it immediately plausible for so many to believe that we can move from internalized memory to externalized memory without remainder? It would seem, at least, that the ground was prepared by an earlier reduction of knowledge to information or data. Only when we view knowledge as the mere aggregation of discrete bits of data can we then believe that it makes little difference whether that data is stored in the mind or in a database.

We seem to be approaching knowledge as if life were a game of Jeopardy which is played well by merely being able to access trivial knowledge at random.  What is lost is the associational dimension of knowledge which constructs meaning and understanding by relating one thing to another and not merely by aggregating data.  This form of knowledge, which we might call metaphorical or analogical, allows us to experience life with the ability to “understand in light of," to perceive through a rich store of knowledge and experience that allows us to see and make connections that richly texture and layer our experience of reality.

But this understanding of memory seems largely absent from the sorts of studies that are frequently cited in discussions of offloaded or outsourced memory.

As for the unaddressed critical consideration, if we grant that we must all outsource or externalize some of our memory, and that it may even be admittedly advantageous to do so, how do we make qualitative judgments about the memory that we can outsource and the memory we should on principle internalize (if we even allow for the latter possibility)?

Here we might take a cue from the religious practices of Jews, Christians, and Muslims, who have long made the memorization of Scripture a central component of their respective forms of piety. Here's a bit more from Steiner commenting on what can be known about early modern literacy:

Scriptural and, in a wider sense, religious literacy ran strong, particularly in Protestant lands. The Authorized Version and Luther’s Bible carried in their wake a rich tradition of symbolic, allusive, and syntactic awareness. Absorbed in childhood, the Book of Common Prayer, the Lutheran hymnal and psalmody cannot but have marked a broad compass of mental life with their exact, stylized articulateness and music of thought. Habits of communication and schooling, moreover, sprang directly from the concentration of memory. So much was learned and known by heart — a term beautifully apposite to the organic, inward presentness of meaning and spoken being within the individual spirit.

Learned by heart--a beautifully apt phrase, indeed. Interestingly, this is an aspect of religious practice that, while remaining relatively consistent across the transition from oral to literate society, appears to be succumbing to the pressures of the Google myth, at least among Protestants. If I have an app that lets me instantly access any passage of my sacred text, in any of a hundred different translations, why would I bother to memorize any of it?

The answer, of course, best and perhaps only learned by personal experience, is that there is a qualitative difference between the "organic, inward presentness of meaning" that Steiner describes and merely knowing that I know how to find a text if I were so inclined. The same is true for how we might come to know a poem. But the Google myth, and the studies that examine it, seem to know nothing of that qualitative difference, or, at least, they choose to bracket it.

I should note in passing that much of what I have recently written about attention is also relevant here. Distraction is the natural state of someone who has no goal that might otherwise command or direct their attention. Likewise, forgetfulness is the natural state of someone who has no compelling reason to commit something to memory. At the heart of both states may be the liberated individual will yielded by modernity. Distraction and forgetfulness seem both to stem from a refusal to acknowledge an order of knowing that is outside of and independent of the solitary self. To discipline our attention and to learn something by heart is, in no small measure, to submit the self to something beyond its own whims and prerogatives.

So, then, we might say that one of the enduring consequences of new forms of externalized memory is not only that they alter the quantity of what is committed to memory, but also that they reconfigure the meaning and value we assign both to the work of remembering and to what is remembered. In this way we begin to see why Illich believed that changing memories amounted to changing cultures. This is also why we should consider that Plato's Socrates was on to something more than critics give him credit for when he criticized writing for how it would affect memory, which was for Plato much more than merely the ability to recall discrete bits of data.

This last point brings me, finally, to an excellent discussion of these matters by John Danaher. Danaher is always clear and meticulous in his writing and I commend his blog, Philosophical Disquisitions, to you. In this post, he explores the externalization of memory via a discussion of a helpful distinction offered by David Krakauer of the Santa Fe Institute. Here is Danaher's summary of the distinction between two different types of cognitive artifacts, or artifacts we think with:

Complementary Cognitive Artifacts: These are artifacts that complement human intelligence in such a way that their use amplifies and improves our ability to perform cognitive tasks and once the user has mastered the physical artifact they can use a virtual/mental equivalent to perform the same cognitive task at a similar level of skill, e.g. an abacus.

Competitive Cognitive Artifacts: These are artifacts that amplify and improve our abilities to perform cognitive tasks when we have use of the artifact but when we take away the artifact we are no better (and possibly worse) at performing the cognitive task than we were before.

Danaher critically interacts with Krakauer's distinction, but finds it useful. It is useful because, like Albert Borgmann's work, it offers to us concepts and categories by which we might begin to evaluate the sorts of trade-offs we must make when deciding what technologies we will use and how.

Also of interest is Danaher's discussion of cognitive ecology. Invoking earlier work by Donald Norman, Danaher explains that "competitive cognitive artifacts don’t just replace or undermine one cognitive task. They change the cognitive ecology, i.e. the social and physical environment in which we must perform cognitive tasks." His critical consideration of the concept of cognitive ecology brings him around to the wonderful work Evan Selinger has been doing on the problem of technological outsourcing, work that I've cited here on more than a few occasions. I commend to you Danaher's post for both its content and its method. It will be more useful to you than the vast majority of commentary you might otherwise encounter on this subject.

I'll leave you with the following observation by the filmmaker Luis Buñuel: “Our memory is our coherence, our reason, our feeling, even our action. Without it, we are nothing.” Let us take some care and give some thought, then, to how our tools shape our remembering.


Presidential debates and social media


I've chosen to take my debates on Twitter. I've done so mostly in the interest of exploring what difference it might make to take in the debates on social media rather than on television.

Of course, the first thing to know is that the first televised debate, the famous 1960 Kennedy/Nixon debate, is something of a canonical case study in media studies. Most of you, I suspect, have heard at some point about how polls conducted after the debate found that those who listened on the radio were inclined to think that Nixon had gotten the better of Kennedy while those who watched the debate on television were inclined to think that Kennedy had won the day.

As it turns out, this is something like a political urban legend. At the very least, it is fair to say that the facts of the case are somewhat more complicated. Media scholar W. Joseph Campbell of American University, leaning heavily on a 1987 article by David L. Vancil and Sue D. Pendell, has shown that the evidence for viewer-listener disagreement is surprisingly scant and suspect. What little empirical evidence did point to a disparity between viewers and listeners depended on less-than-rigorous methodology.

Campbell, who's written a book on media myths, is mostly interested in debunking the idea that viewer-listener disagreement was responsible for the outcome of the election. His point, well-taken, is simply that the truth of the matter is more complicated. With this we can, of course, agree. It would be a mistake, however, to write off the consequences over time of the shift in popular media. We may, for instance, take the first Clinton/Trump debate and contrast it to the Kennedy/Nixon debate and also to the famous Lincoln/Douglas debates. It would be hard to maintain that nothing has changed. But what is the cause of that change?

Does the evolution of media technology alone account for it? Probably not, if only because in the realm of human affairs we are unlikely to ever encounter singular causes. The emergence of new media itself, for instance, requires explanation, which would lead us to consider economic, scientific, and political factors. However, it would be impossible to discount how new media shape, if nothing else, the conditions under which political discourse evolves.

Not surprisingly, I turned to the late Neil Postman for some further insight. Indeed, I've taken of late to suggesting that the hashtag for 2016, should we want one, ought to be #NeilPostmanWasRight. This was a sentiment that I initially encountered in a fine post by Adam Elkus on the Internet culture wars. During the course of his analysis, Elkus wrote, "And at this point you accept that Neil Postman was right and that you were wrong."

I confess that I rather agreed with Postman all along, and on another occasion I might take the time to write about how well Postman's writing about technology holds up. Here, I'll only cite this statement of his argument in Amusing Ourselves to Death:

"My argument is limited to saying that a major new medium changes the structure of discourse; it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelligence and wisdom, and by demanding a certain kind of content--in a phrase, by creating new forms of truth-telling."

This is the argument Postman presents in a chapter aptly titled "Media as Epistemology." Postman went on to add, admirably, that he is "no relativist in this matter," and that he believes "the epistemology created by television not only is inferior to a print-based epistemology but is dangerous and absurdist."

Let us make a couple of supporting observations in passing, neither of which is original or particularly profound. First, what do we remember about the televised debates prior to the age of social media? Do any of us, old enough to remember, recall anything other than an adroitly delivered one-liner? And you know exactly which ones I have in mind already. Go ahead, before reading any further: call to mind your top three debate memories. Then tell me if at least one of these is not among the three.

Reagan, when asked about his age, joking that we would not make an issue out of his opponent's youth and inexperience.

Sen. Bentsen reminding Dan Quayle that he is no Jack Kennedy.

Admiral Stockdale, seemingly lost on stage, wondering, "Who am I? Why am I here?"

So how did we do? Did we have at least one of those in common? Here's my point: what is memorable and what counts for "winning" or "losing" a debate in the age of television had precious little to do with the substance of an argument. It had everything to do with style and image. Again, I claim no great insight in saying as much. In fact, this is, I presume, conventional wisdom by now.

Consider as well an example fresh from the first Clinton/Trump debate.

.@chucktodd: #debatenight exposed Trump's lack of preparation, but Clinton seemed over-prepared at times.

— Meet the Press (@MeetThePress) September 27, 2016

You tell me what "over-prepared" could possibly mean. Moreover, you tell me if that was a charge that you can even begin to imagine being leveled against Lincoln or Douglas or, for that matter, Nixon or Kennedy.

Let's let Marshall McLuhan take a shot at explaining what Mr. Todd might possibly have meant.

I know, you're not going to watch the whole thing. Who's got the time? [#NeilPostmanWasRight] But if you did you would hear McLuhan explaining why the 1976 Carter/Ford debate was an "atrocious misuse of the TV medium" and “the most stupid arrangement of any debate in the history of debating." Chiefly, the content and the medium were mismatched. The style of debating both candidates embodied was ill-suited for what television prized, something approaching casual ease, warmth, and informality. Being unable to achieve that style means "losing" the debate regardless of how well you knew your stuff. As McLuhan tells Tom Brokaw, "You're assuming that what these people say is important. All that matters is that they hold that audience on their image."

Incidentally, writing in Slate about this clip in 2011, David Haglund wrote, "What seems most incredible to me about this cultural artifact is that there was ever a time when The Today Show would spend ten uninterrupted minutes talking about the presidential debates with a media theorist." [#NeilPostmanWasRight]

So where does this leave us? Does social media, like television in Postman's phrasing, present us with a new epistemology? Perhaps. We keep hearing a lot of talk about post-factual politics. If that describes our political climate, and I have little reason to doubt it, then we did not suddenly land here after the advent of social media or the Internet. Facts, or simply the truth, have been fighting a rear-guard action for some time now.

I will make one passing observation, though, about the dynamics of following a debate on Twitter. While the entertainment on offer in the era of television was the thrill of hearing the perfect zinger, social media encourages each of us to become part of the action. Reading tweet after tweet of running commentary on the debate, from left, right, and center, I was struck by the near unanimity of tone: either snark or righteous indignation. Or, better, the near unanimity of apparent intent. No one, it seems to me, was trying to persuade anybody of anything. Insofar as I could discern a motive factor I might on the one hand suggest something like catharsis, a satisfying expunging of emotions. On the other, the desire to land the zinger ourselves. To compose that perfect tweet that would suddenly go viral and garner thousands of retweets. I saw more than a few cross my timeline--some from accounts with thousands and thousands of followers and others from accounts with a meager few hundred--and I felt that it was not unlike watching someone hit the jackpot in the slot machine next to me. Just enough incentive to keep me playing.

A citizen may have attended a Lincoln/Douglas debate to be informed and also, in part, to be entertained. The consumer of the television era tuned into a debate ostensibly to be informed, but in reality to be entertained. The prosumer of the digital age aspires to do the entertaining.

7 ways


Most of us are familiar with some version of what's popularly known as the Golden Rule. Here is the formulation attributed to Jesus in Matthew's Gospel: "So whatever you wish that others would do to you, do also to them" (ESV). However familiar we might be with this moral principle in the abstract, we seem to be unaware of all the particular situations to which it could be applied.

For instance, consider this less than elegant imperative that I offer my students: read and write as you would want to be read and written to. The underlying principles are fairly straightforward: read the work of others with the same generosity of spirit with which you would want your own writing considered. Likewise, write with the same care that you would have others take when they write what they would have you read.

Along similar lines, I'll attempt to put these principles, and a few others like them, a bit more succinctly and apply them to our online interactions. These principles are chiefly concerned with the internet as a space for public discourse, and the premise in each case is that online we are or ought to be our brother and sister's keeper.

1. Read generously.

In A History of Reading, Alberto Manguel wrote, "All writing depends upon the generosity of the reader." This can hardly be improved upon. The point is not that we should simply accept all that we read. Rather, the point is that understanding should ordinarily precede criticism, and understanding requires a measure of sympathy. If not sympathy, then at least a genuine openness of mind, a willingness to enter into the mental world of the writer. Knowing how much bias we ordinarily bring to the work of understanding, I would go so far as to say that we should strive to read our enemies as if they were our friends and our friends as if they were our enemies.

2. Write carefully.

By which I mean, quite literally, that our writing should be full of care, particularly for those who would read it. The philosopher Stephen Toulmin once observed, "The effort the writer does not put into writing, the reader has to put into reading." Before we complain about being misunderstood, we should ask ourselves if we've done all we can to make ourselves understood. This is no easy task, to be sure. Communicating with nothing but these precious little marks on a screen is a precarious business. But if we are going to attempt it, let us at least do it with as much care as we can muster.

3. Cite abundantly.

Give credit where credit is due. For instance, I'll note here that I owe my knowledge of the line by Manguel to Alan Jacobs's fine little book, The Pleasures of Reading in an Age of Distraction. When we have learned something from others, we should do them the courtesy of acknowledging their labors.

4. Link mindfully.

If we are going to be a relay in the network, let us at least do so with a view to making our little corner of the network a bit more truthful. So much of what comes across our feeds is inaccurate, misleading, or worse. It is true that we cannot all be full-time fact checkers, of course. But it doesn't take too much digging to verify the validity of a statistic, a historical claim, or the attribution of authorship. If it raises a red flag and you don't have the time or desire to scrutinize the claims, then don't pass it along, even if (especially if) it supports your "side" of things. We can all do better than embracing misinformation as we fight for our causes. Remember, too, that knowingly floating misleading half-truths is the work of serpents and witches. "Tell the truth, and shame the devil!" Shakespeare's Hotspur declares. Let us do likewise. Link to the truth, and shame the devil!

5. Share sparingly.

I am thinking here about the claim our sharing makes upon our neighbor's attention. Our attention is a precious and limited resource; we should guard it vigilantly, and we should take some care to keep from taxing the attention of others. It is true that our action alone may not make much of a difference; this could be an ineffectual strategy. And, perhaps, it is also true that those to whom we are connected online may not even see matters as we do. Somehow, though, I think we should learn to take some responsibility for the demands we make on the attention of others.

6. Correct graciously ...

... if you must. Sometimes it is enough that we simply do not perpetuate a falsehood; sometimes more may be required of us. Let me be clear about what I mean. I'm not exactly a big fan of the xkcd comics, but this old classic is still valuable.

[xkcd comic: "Duty Calls"]

Yes, someone is always wrong on the Internet, and I'm not suggesting that we go on a crusade against error. Letting it die upon reaching us is often good enough. Occasionally, however, failing to engage online may be a form of morally questionable silence. Use your judgment, lay aside all self-righteousness, and proceed with as much grace as the situation allows. And, of course, let us hear the criticisms others present to us in a similar spirit, remembering that we need not always make a reply. Silence in the face of evil may be a moral failure; silence before our critics is sometimes the better part of wisdom.

7. Respect, always.

Respect the privacy of others, respect their time, respect the dignity inherent in their humanity. Do not give offense needlessly.