Archive for the 'computer science' category

Computer Science, Web of Science, Scopus, conferences, citations, oh my!

The standard commercial library citation tools, Web of Science (including their newish Proceedings product) and Scopus, have always been a bit iffy for computer science. That's mostly because computer science scholarship is largely conference-based rather than journal-based, and those tools have tended to massively privilege the journal literature over conferences.

Of course, these citation tools are problematic at best for judging scholarly impact in any field; using them for CS is even more so, since the flaws are really amplified.

A recent article in the Communications of the ACM goes through the problems in a bit more detail: Invisible Work in Standard Bibliometric Evaluation of Computer Science by Jacques Wainer, Cleo Billa and Siome Goldenstein.

A bit about why they did the research:

Multidisciplinary committees routinely make strategic decisions, rule on subjects ranging from faculty promotion to grant awards, and rank and compare scientists. Though they may use different criteria for evaluations in subjects as disparate as history and medicine, it seems logical for academic institutions to group together mathematics, computer science, and electrical engineering for comparative evaluation by these committees.

*snip*

Computer scientists have an intuitive understanding that these assessment criteria are unfair to CS as a whole. Here, we provide some quantitative evidence of such unfairness.

A bit about what they did:

We define researchers' invisible work as an estimation of all their scientific publications not indexed by WoS or Scopus. Thus, the work is not counted as part of scientists' standard bibliometric evaluations. To compare CS invisible work to that of physics, mathematics, and electrical engineering, we generated a controlled sample of 50 scientists from each of these fields from top U.S. universities and focused on the distribution of invisible work rate for each of them using statistical tests.

We defined invisible work as the difference between number of publications scientists themselves list on their personal Web pages and/or publicly available curriculum vitae (we call their "listed production") and number of publications listed for the same scientists in WoS and Scopus. The invisible work rate is the invisible work divided by number of listed production. Note that our evaluation of invisible work rate is an approximation of the true invisible work rate because the listed production of particular scientists may not include all of their publications.
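To make the definition concrete, here's a minimal sketch of the arithmetic in Python. This is my own illustration with made-up numbers, not the authors' code; I've chosen the numbers so the result matches the 66% average the paper reports for CS under WoS (see below):

def invisible_work_rate(listed, indexed):
    # Invisible work = listed production minus what the index finds;
    # the rate is that difference as a fraction of the listed production.
    return (listed - indexed) / listed

# A hypothetical computer scientist: 100 publications listed on their
# web page or CV, of which the indexing service finds only 34.
print(invisible_work_rate(listed=100, indexed=34))  # 0.66, i.e. 66% invisible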

A bit about what they found:

When CS is classified as a science (as it was in the U.S. News & World Report survey), the standard bibliometric evaluations are unfair to CS as a whole. On average, 66% of the published work of a computer scientist is not accounted for in the standard WoS indexing service, a much higher rate than for scientists in math and physics. Using the new conference-proceedings service from WoS, the average invisible work rate for CS is 46%, which is higher than for the other areas of scientific research. Using Scopus, the average rate is 33%, which is higher than for both EE and physics.

CS researchers' practice of publishing in conference proceedings is an important aspect of the invisible work rate of CS. On average, 82% of conference publications are not indexed in WoS compared to 47% not indexed in WoS-P and 32% not indexed in Scopus.

And a bit about what they suggest:

Faced with multidisciplinary evaluation criteria, computer scientists should lobby for WoS-P, or better, Scopus. Understanding the limitations of the bibliometric services will help a multidisciplinary committee better evaluate CS researchers.

There's quite a bit more in the original article, including what their sample biases might be, some other potential citation services and other issues.

What do I take away from this? Using citation metrics as a measure of scientific impact is suspect at best. In particular (and the authors make this point), trying to use one measure or kind of measure across different disciplines is even more problematic.

Let's just start from scratch. But more on that in another post.


Issues in Science & Technology Librarianship, Winter 2011

As usual, a bunch of great new articles from the most recent ISTL!


Friday Fun: Top 10 truly bizarre programming languages

Feb 18 2011. Published under computer science, friday fun.

Twitter brings us some truly wonderful and, yes, bizarre things. I saw this one a few days ago via Vitor Pamplona and thought it was too good to pass up.

Anyways, here's the story from the original Listverse post, Top 10 truly bizarre programming languages:

This is a list of some of the most bizarre programming languages you will ever see. These types of languages are usually called "Esoteric Programming Languages". An esoteric programming language (sometimes shortened to esolang) is a computer programming language designed either as a test of the boundaries of programming language design, to experiment with weird ideas or simply as a joke, rather than for practical reasons. There is usually no intention of the language being adopted for real-world programming. Such languages are often popular among hackers and hobbyists.

Usability is rarely a high priority for such languages; often quite the opposite. The usual aim is to remove or replace conventional language features while still maintaining a language that is Turing-complete, or even one for which the computational class is unknown.

The list is truly fascinating and hilarious.

My favourite of the bunch was the Chef programming language:

Chef, designed by David Morgan-Mar in 2002, is an esoteric programming language in which programs look like cooking recipes. The variables tend to be named after basic foodstuffs, the stacks are called "mixing bowls" or "baking dishes" and the instructions for manipulating them "mix", "stir", etc. The ingredients in a mixing bowl or baking dish are ordered "like a stack of pancakes".

According to the Chef Home Page, the design principles for Chef are:

- Program recipes should not only generate valid output, but be easy to prepare and delicious.
- Recipes may appeal to cooks with different budgets.
- Recipes will be metric, but may use traditional cooking measures such as cups and tablespoons.

The following is listed as the traditional "hello world" beginner programming example for the language:

Hello World Souffle.
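
This recipe prints the immortal words "Hello world!", in a basically brute force way.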

Ingredients.
72 g haricot beans
101 eggs
108 g lard
111 cups oil
32 zucchinis
119 ml water
114 g red salmon
100 g dijon mustard
33 potatoes

Method.
Put potatoes into the mixing bowl.
Put dijon mustard into the mixing bowl.
Put lard into the mixing bowl.
Put red salmon into the mixing bowl.
Put oil into the mixing bowl.
Put water into the mixing bowl.
Put zucchinis into the mixing bowl.
Put oil into the mixing bowl.
Put lard into the mixing bowl.
Put lard into the mixing bowl.
Put eggs into the mixing bowl.
Put haricot beans into the mixing bowl.
Liquefy contents of the mixing bowl.
Pour contents of the mixing bowl into the baking dish.

Serves 1.
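
If you're wondering how that could possibly print anything: each ingredient's quantity doubles as an ASCII code (72 is "H", 101 is "e" and so on), and each "Put" line pushes one onto the mixing-bowl stack. Here's a minimal Python sketch of my reading of the semantics -- my own illustration, not part of the original post:

# Each ingredient's quantity doubles as an ASCII code.
ingredients = {
    "haricot beans": 72, "eggs": 101, "lard": 108, "oil": 111,
    "zucchinis": 32, "water": 119, "red salmon": 114,
    "dijon mustard": 100, "potatoes": 33,
}

# The "Put ... into the mixing bowl" lines from the Method, in order.
method = ["potatoes", "dijon mustard", "lard", "red salmon", "oil",
          "water", "zucchinis", "oil", "lard", "lard", "eggs",
          "haricot beans"]

bowl = []  # the mixing bowl is a stack
for item in method:
    bowl.append(ingredients[item])

# Liquefy turns the numbers into characters; pouring the bowl into the
# baking dish and serving it outputs from the top of the stack down.
print("".join(chr(n) for n in reversed(bowl)))  # Hello world!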

Yeah, right. I'd like to see someone reprogram the space shuttle control programs in Chef.

Frankly, I'm not brave enough to test it out using the links to interpreters on the Chef page. Are you?


From the Archives: Dreaming in Code by Scott Rosenberg

I have a whole pile of science-y book reviews on two of my older blogs, here and here. Both of those blogs have now been largely superseded by or merged into this one. So I'm going to be slowly moving the relevant reviews over here. I'll mostly be doing the posts one or two per weekend and I'll occasionally be merging two or more shorter reviews into one post here.

This one, of Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software, is from August 9, 2007.

=======

Every organization relies on software these days: big custom systems, shrink-wrapped commercial software, all the various protocols and programs keeping the Net running. Big organizations, small organizations, tech companies of course, and libraries in particular are relying on the fruits of software developers' mental labours more and more. And with the rise of Web 2.0 in libraries and educational institutions, our reliance on our programmers will only get more pronounced. But how much do we really understand about the art of software development and the strange and wonderful habits of programmers, systems analysts and all the rest of the software bestiary?

Not much, it seems. And that's where this fascinating insider account of a high-profile open source software project comes in. Salon.com co-founder and author Scott Rosenberg spent three years as a fly on the wall on Mitch Kapor's project to create the ultimate Personal Information Manager (PIM), Chandler. Kapor's project was highly idealistic from the very beginning; the idea was that he would use some of his software-boom fortune to finance a project to make everyone's lives easier: a PIM that is flexible, sharable and open, able to handle calendaring, email, note taking and events. Unfortunately, the project was also cursed with design difficulties and numerous delays, with a schedule that stretched out from one year to two and three years and beyond (and still not finished today).

The book includes a colourful cast of both obscure and well-known software luminaries (like Andy Hertzfeld), and goes beyond merely recounting the ups and downs of Chandler to offer a kind of history of attempts to organize and systematize software development. Name-checking such great software engineering writers as Frederick Brooks, Rosenberg talks about the whys and wherefores of structured programming, object orientation and other approaches. Many chapters mix details of the vagaries of the Chandler project with relevant discussions of theoretical topics in software engineering (such as trying to create truly reusable software modules) and more philosophical musings on the art of software development. Most of all, Rosenberg places us firmly inside the workings of a programming project from hell, complete with gory details, tales from the historical trenches and a bit of fascinating theoretical discussion on why software is so hard. (So, what's it really like being stuck in the programming project from hell? Trust me, I've been there and this is a pretty good example of the real thing.)

There are a couple of really good bits that stood out for me in this book, bits that resonated with my own experiences managing and developing software. On page 54 he has a discussion of death march projects and the optimism/pessimism dichotomy that all programmers live with and obsess over every day. Having done a couple of death marches characterized by such extremes, it really resonated with me. On page 75, he begins a discussion of the various programming languages and the almost religious zeal most programmers have for their favourite ones -- I was a big fan of Fortran as a young programmer. On page 274, Rosenberg has a telling comment about programmers' historical blindness: their inability to learn from their own mistakes, or to use the literature to learn from others' mistakes. I like the way he puts it: "It's tempting to recommend these [pioneering software engineering] NATO reports be required reading for all programmers and their managers. But, as Joel Spolsky says, most programmers don't read much about their own discipline. That leaves them trapped in infinite loops of self-ignorance." I like to think that as a librarian collecting the literature of software engineering, I can help in a small way to make programmers more aware of their past.

On a lighter note, I also like the joke that Rosenberg puts on page 275-276:

A Software Engineer, a Hardware Engineer, and a Departmental Manager were on their way to a meeting in Switzerland. They were driving down a steep mountain road when suddenly the brakes on their car failed. The car careened almost out of control down the road, bouncing off the crash barriers until it miraculously ground to a halt scraping along the mountainside. The car's occupants, shaken but unhurt, now had a problem: They were stuck halfway down a mountain in a car with no brakes. What were they to do?

"I know," said the Departmental Manager. "Let's have a meeting, propose a Vision, formulate a Mission Statement, define some Goals, and by a process of Continuous Improvement find a solution to the Critical Problems, and we can be on our way."

"No, no," said the Hardware Engineer. "that will take far too long, and, besides, that method has never worked before. I've got my Swiss Army knife with me, and in no time at all I can strip down the car's braking system, isolate the fault, fix it, and we can be on our way."

"Well," said the Software Engineer, "before we do anything, I think we should push the car back up the road and see if it happens again."

I'm going to use this joke when I do information literacy sessions for CS and Engineering grad and undergrad students, and maybe even to break the ice at a departmental meeting.

A great book, an insider view of software development, a real insight into how programmers think and work and how software projects grow and evolve, sometimes how they careen out of control. So, who would I recommend this book for? A number of different constituencies would find this book useful and entertaining.

  • IT Managers would find this book very useful for its insights into the personalities of programmers as well as for its history of failed attempts to make a purely predictable engineering discipline out of programming.
  • Programmers would find this book terrific, seeing a lot of their own eccentricities in the many stories. As well, programmers would get a lot of insight into their pointy-haired bosses' attempts to turn them into engineers rather than the free-spirited hacker-artists they see themselves as.
  • Families of either of the two above groups will get valuable insight into the slightly deranged members of their families, their joys, obsessions and frustrations.
  • People that support or employ software developers or managers, such as scitech librarians, HR people in tech firms and venture capitalists in software firms, will hopefully come to understand how and why software projects are created and sometimes crash and burn. Not to mention how to mentor and encourage developers to take advantage of what is known to improve productivity. The other books and articles listed in the notes are also a treasure trove of further exploration and information. (I hate it when books like this don't have a proper bibliography -- it makes it a lot more trouble to sift through the notes later on for further reading.)
  • And really, anybody that uses software of any kind. And since basically everyone uses some sort of software these days, just about anyone would really appreciate this book. Understanding how the knowledge economy and the Internet boom are built from the ground up is certainly enlightening and important. You'll never see a bank machine, interact with a big company's insane internal systems procedures or even use a simple web application the same way. Understanding the challenges involved in getting these systems even close to right, and the inevitability of their imperfections, is an important revelation in the modern world.

Rosenberg, Scott. Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software. New York: Crown, 2007. 400pp. ISBN-13: 978-1400082476


From the Archives: Reviews of Cory Doctorow and Mafiaboy

I have a whole pile of science-y book reviews on two of my older blogs, here and here. Both of those blogs have now been largely superseded by or merged into this one. So I'm going to be slowly moving the relevant reviews over here. I'll mostly be doing the posts one or two per weekend and I'll occasionally be merging two or more shorter reviews into one post here.

This post, from April 4, 2009, covered two books:

=======

I'm reviewing these two books together for two reasons. First of all, I don't feel the need to go on at great length about either of them. Secondly, I think that they're related -- they both touch on the free, open and ungoverned (ungovernable?) nature of the Internet. One is a white hat treatment and the other, black hat. Or perhaps many will think of both of these books as representing a black hat perspective -- that both represent the worst that the Internet has brought to modern society. The Web promotes openness and freedom, and generally we consider both of those qualities to be positive. Certainly, Cory Doctorow would be a prime advocate of openness on the Web. On the other hand, the freedom that the Internet provides can also be cover for those that would exploit weakness and take advantage of others. Certainly, the story of Mafiaboy epitomizes the dark side of hacker culture.

Cory Doctorow's Content is a collection of Doctorow's various essays on copyright and open content. Gathered from a bunch of different places, this is a stimulating and thought-provoking collection. Of course, every single essay is available for free on the net. An interesting conundrum, then: if it's all available for free on the Web, why did I buy it? Most of all, I really like the idea of sending a little cash to the artists and thinkers whose work moves and inspires me. So, yes, I still buy books and CDs and pay to see movies in the theatre.

Never mind what you should pay for this book -- who should read it? Well, if you're a copyright minimalist, it's preaching to the choir. You'll agree that information wants to be free and that the best business model for artists is to give away the stuff that's easily copied and sell the stuff that isn't. In other words, in a world where bits can be copied for virtually no cost, you have to be able to sell something other than pure content to make a living -- like experience. If you're a copyright maximalist, well, Doctorow is the anti-christ and you probably won't really appreciate the book. If, like most, you're somewhere in the middle, then this book is for you. Doctorow makes a very strong and very persuasive case for his point of view. It's compelling and hard to ignore. You might not end up agreeing with everything (I certainly don't), but he will definitely win you over on a lot of points.

If there's one thing that detracts from Doctorow's ability to make his case, it's his attitude. Sometimes he's just too cocky, too arrogant, too sure that he's right and you're dead wrong. There's no agree-to-disagree in his world; it's my-way-or-the-highway. Take his opinion of opera:

The idea of a 60-minute album is as weird in the Internet era as the idea of sitting through 15 hours of Der Ring des Nibelungen was 20 years ago. There are some anachronisms who love their long-form opera, but the real action is in the more fluid stuff that can slither around on hot wax -- and now the superfluid droplets of MP3s and samples. Opera survives, but it is a tiny sliver of a much bigger, looser music market. The future composts the past: old operas get mounted for living anachronisms; Andrew Lloyd Webber picks up the rest of the business.

My only reaction is that Doctorow is completely wrong in this. In fact, he contradicts the main point of the long tail that Internet gurus are so adamant about. The new media landscape doesn't make 60-minute operas less interesting and relevant. It makes them more so -- finally able to find their niche in the long tail of human artistic expression. People that like opera can enjoy and obsess over it. People that don't, well, can listen to whatever they like. The point isn't Doctorow's rather juvenile assertion that some particular type of artistic expression is somehow not worthy; the point is that the Internet enables every kind of artistic expression in a way that was not possible before.

In any case, that was one of the few false notes (all the same kind of thing) in an otherwise excellent book. Read it and disagree, engage and enrage. But it's too important to ignore. I would recommend this book to any academic or public library as well as to anyone interested in the future of content in a fragmented and radically shifting online landscape.

And let's take a look at Michael Calce and Craig Silverman's account of Calce's life as Internet hacker Mafiaboy. It's a fascinating story of a Montreal-area teen and how he got involved in the world of hacking and ended up launching a couple of big denial of service attacks on some prominent web sites like Yahoo! and CNN. Calce tells the story of how he got involved in the hacking underworld, how he was caught, the jail time he served, and how he's reformed and is using his obvious computing gifts for good instead of evil.

A couple of interesting points, though. Especially in his telling of the early part of the story, Calce comes off as a bit arrogant and clueless about the seriousness of his actions, not really showing much empathy. I find this interesting because, while the later chapters make it pretty clear that he's grown up and left those feelings mostly behind, there are still glimpses of and insights into the teenager that caused the havoc. We see the macho reputation building, the bragging and the power trips, but not really from an introspective point of view. I guess it's hard to expect anyone to write that kind of book.

A great story, well told, well worth reading and thinking about. I would recommend it to any academic or public library interested in the way the Internet is shaping our society.

Calce, Michael with Craig Silverman. Mafiaboy: How I Cracked the Internet and Why It's Still Broken. Toronto: Viking Canada, 2008. 277pp.

Doctorow, Cory. Content: Selected Essays on Technology, Creativity, Copyright, and the Future of the Future. San Francisco: Tachyon, 2008. 213pp.


Friday Fun: Top 50 Programming Quotes of All Time

Ok, this is just plain hysterical. And insightful. And both insightfully hysterical and hysterically insightful.

Enjoy.

Here's a taste; read the whole thing for yourself.

50. "Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the universe trying to build bigger and better idiots. So far, the universe is winning."
- Rick Cook

38. "The use of COBOL cripples the mind; its teaching should therefore be regarded as a criminal offense."
- E.W. Dijkstra

17. "If McDonalds were run like a software company, one out of every hundred Big Macs would give you food poisoning, and the response would be, 'We're sorry, here's a coupon for two more.' "
- Mark Minasi

6. "The trouble with programmers is that you can never tell what a programmer is doing until it's too late."
- Seymour Cray


Open Science: Digital Science and Open Research Computation

Two recent announcements that are worth noting here.

The first is for Digital Science, a Macmillan / Nature Publishing Group project involving some of the usual science online suspects, like Timo Hannay and Kaitlin Thaney, along with some others on a really dynamic-looking multi-disciplinary team.

The press release is here and the about page here.

Digital Science provides software and information to support researchers and research administrators in their everyday work, with the ultimate aim of making science more productive through the use of technology. As well as developing our own solutions, we also invest in promising start-ups and other partners, working closely with them to help them realise their full potential.

*snip*

The activities of Digital Science combine in-house software development and domain expertise with technologies and services created in collaboration with a range of world-class partners, including academic research groups, start-up businesses and established companies.

This is a bit on what they're trying to accomplish:

Digital technologies are transforming all areas of our lives -- commerce, education, the arts -- and science, too, is changing. The web has already revolutionised the way we produce, publish and disseminate scientific knowledge. It also provides fundamental new opportunities for processing, annotating, curating, querying and sharing information, as well as organising our laboratories and the research process itself.

But we're still much nearer the beginning of this journey than the end. For all the recent progress, we're not even close to fully realising the potential of information technology to accelerate the discovery and application of scientific insights. At Digital Science, we aim to contribute to these important developments by providing tools and services that will make researchers more productive through state-of-the-art software.

In that sense we're very different to the content-based businesses normally associated with a publishing company like Macmillan. But we will also be working closely with scientific publishers -- not least our colleagues at Nature Publishing Group -- to make the most of their scientific expertise and carefully curated content, because only by combining expert human judgement with the best technology can we hope to have the impact that we seek.

Very interesting and very ambitious. It'll be worth watching to see what they come up with in the next year or two.

The other project worth mentioning is a new BioMed Central open access journal, Open Research Computation, with Cameron Neylon as editor-in-chief and a whole host of usual suspects on the Editorial Board.

There's lots of information in the Instructions for Authors, the About page and the FAQ:

Aims and scope
Open Research Computation publishes peer reviewed articles that describe the development, capacities, and uses of software designed for use by researchers in any field. Submissions relating to software for use in any area of research are welcome as are articles dealing with algorithms, useful code snippets, as well as large applications or web services, and libraries. Open Research Computation differs from other journals with a software focus in its requirement for the software source code to be made available under an Open Source Initiative compliant license, and in its assessment of the quality of documentation and testing of the software. In addition to articles describing software Open Research Computation also welcomes submissions that review or describe developments relating to software based tools for research. These include, but are not limited to, reviews or proposals for standards, discussion of best practice in research software development, educational and support resources and tools for researchers that develop or use software based tools.

Cameron also has a blog post talking about the new initiative:

Computation lies at the heart of all modern research. Whether it is the massive scale of LHC data analysis or the use of Excel to graph a small data set. From the hundreds of thousands of web users that contribute to Galaxy Zoo to the solitary chemist reprocessing an NMR spectrum we rely absolutely on billions of lines of code that we never think to look at. Some of this code is in massive commercial applications used by hundreds of millions of people, well beyond the research community. Sometimes it is a few lines of shell script or Perl that will only ever be used by the one person who wrote it. At both extremes we rely on the code.

We also rely on the people who write, develop, design, test, and deploy this code. In the context of many research communities the rewards for focusing on software development, of becoming the domain expert, are limited. And the cost in terms of time and resource to build software of the highest quality, using the best of modern development techniques, is not repaid in ways that advance a researcher's career. The bottom line is that researchers need papers to advance, and they need papers in journals that are highly regarded, and (say it softly) have respectable impact factors. I don't like it. Many others don't like it. But that is the reality on the ground today, and we do younger researchers in particular a disservice if we pretend it is not the case.

Open Research Computation is a journal that seeks to directly address the issues that computational researchers have. It is, at its heart, a conventional peer reviewed journal dedicated to papers that discuss specific pieces of software or services. A few journals now exist in this space that either publish software articles or have a focus on software. Where ORC will differ is in its intense focus on the standards to which software is developed, the reproducibility of the results it generates, and the accessibility of the software to analysis, critique and re-use.

It's a bit of synchronicity that these two announcements came at around the same time. The ground is shifting in the way science is done and the way it is reported. Both these projects represent (re)evolutionary steps along the path towards greater and greater computational influence on scientific practice.

Neither project seems to have any direct librar* involvement, although I can think of one or two librarians who'd fit in nicely on ORC's editorial board. I can also see that libraries could be partners with Digital Science in promoting and implementing the kinds of products that they'll likely be experimenting with.

We live in interesting times and I can hardly wait to see what happens.


Exploring Open Science with Computer Science undergrads

York University Computer Science & Engineering professor Anestis Toptsis was kind enough recently to invite me to speak to his CSE 3000 Professional Practice in Computing class.

He gave me two lecture sessions this term, one to talk about library-ish stuff. In other words, what third year students need to know about finding conference and journal articles (and other stuff too) for their assignments and projects. You can find my notes here, in the lecture 1 section.

In the second session, which I gave yesterday, he basically let me talk about anything that interested me. So, of course, I talked about Open Science. Here are the slides I used, heavily based on the talk I gave at Brock for Open Access Week a little while ago.

I tried to emphasize demoing the projects as much as possible rather than just talking about them. I also emphasized the Polymath-type projects more than in the previous talk -- a strategy suggested by Michael Nielsen in an email exchange.

How was the reaction? A little stunned, I think, perhaps because I covered a lot of ground in a short period of time, from the state of scholarly publishing to blogging networks. But overall, I did seem to have their attention so that's a good thing.

I'm giving this talk again to first year Computer Science students in January so I have another kick at the can to get it right. I think I'll pare it down quite a bit and try and talk in greater detail about fewer concepts as well as integrating my overview with the detailed case studies a bit better. Any suggestions would be appreciated.

And once again, thanks to Anestis for giving me this great opportunity.


From the Grace Hopper Celebration of Women in Computing

This year's Grace Hopper Celebration of Women in Computing took place this past week in Atlanta, GA.

I thought I'd gather together some small part of the blog posts I've been seeing floating around the Internets on this wonderful event.

Most of the blogs I link to have made multiple posts about the GHC -- poke around and check those out too.

The conference is on Twitter here, this year's hashtag is #ghc10, and the conference blog aggregation page is here.

It's definitely a conference I'd love to get to one of these days!

If I've missed any good posts, please leave the links in the comments.


Does Computer Science Have a Culture?

Sep 09 2010. Published under computer science, culture of science, education.

That's the question Eugene Wallingford asks in a recent post at his blog, Knowing and Doing.

If you studied computer science, did your undergrad alma mater or your graduate school have a CS culture? Did any of your professors offer a coherent picture of CS as a serious intellectual discipline, worthy of study independent of specific technologies and languages?

In graduate school, my advisor and I talked philosophically about CS, artificial intelligence, and knowledge in a way that stoked my interest in computing as a coherent discipline. A few of my colleagues shared our interests, but many of my fellow graduate students were more interested in specific problems and solutions. They viewed our philosophical explorations as diversions from the main attraction.

Unfortunately, when I look around at undergrad CS programs, I rarely see a CS culture. This is true of what I see at my own university, at my friends' schools, and at schools I encounter professionally. Some programs do better than others, but most of us could do better. Some of our students would appreciate the intellectual challenge that is computer science beyond installing the latest version of Linux or making Eclipse work with SVN.

Is CS a hollow shell of a discipline, a discipline with no overarching philosophy or narrative, no deep mysteries to plumb? Do physics and chemistry and math and biology all have these coherent narrative structures, a list of great accomplishments and even greater unsolved mysteries, a "way of doing things" that CS lacks?

Part of what makes it hard to judge is that CS is such a new discipline that the cultural perspectives it has carried over from its parent disciplines -- electrical engineering and math, primarily -- obscure what's unique. And that tension between science and engineering pervades computing at every level. Perhaps it's unfair to compare CS only to science disciplines; maybe the culture of engineering needs to be thrown in there as well. And since there's so much business and organizational computing that gets done too, maybe there's an applications-oriented, almost-business culture that seeps in at the edges as well.

Of course, there's no real, definitive answer to the question, only approximations. At some point, a wave function may collapse and we'll be able to observe a final answer. But not yet, I think.

So, what do you think? Does computer science have a culture? If so, what are the beliefs and behaviours that underpin it? What are its shared attitudes, values, goals, and practices?

(But what got Eugene thinking about this in the first place? It was Zed A. Shaw's post Go To University, Not For CS. It's not directly related to the direction I've taken here, but it's still very interesting. It's more about CS supposedly having a shallow intellectual/scientific culture.)

