A few years ago I saw this page: http://www.csis.pace.edu/~bergin/patterns/ppoop.html 

Local discussion focused on figuring out whether this was a joke or not. For a while, we felt it had to be even though we knew it wasn't. Today I'm willing to admit the authors believe what is written there. They are sincere.

But... I'd call myself a hacker, at least in their terminology, yet my solution isn't there. Just search a small table! No objects required. Trivial design, easy to extend, and cleaner than anything they present. Their "hacker solution" is clumsy and verbose. Everything else on this page seems either crazy or willfully obtuse. The lesson drawn at the end feels like misguided epistemology, not technological insight.
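A minimal sketch of that table-driven approach, in Go (illustrative only, not from the original post; the messages approximate those on the linked page):

package main

import "fmt"

// The whole design: a small table mapping OS name to message.
var osTable = []struct {
    name, msg string
}{
    {"unix", "This is a UNIX box and therefore good."},
    {"windows", "This is a Windows box and therefore bad."},
    {"mac", "This is a Macintosh box and therefore far superior."},
}

func describe(os string) string {
    for _, e := range osTable {
        if e.name == os {
            return e.msg
        }
    }
    return "This is not a box."
}

func main() {
    fmt.Println(describe("unix"))
}

Adding a new OS means adding one line to the table; no new types, no new files.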

It has become clear that OO zealots are afraid of data. They prefer statements or constructors to initialized tables. They won't write table-driven tests. Why is this? What mindset makes a multilevel type hierarchy with layered abstractions better than searching a three-line table? I once heard someone say he felt his job was to remove all while loops from everyone's code, replacing them with object stuff. Wat?

But there's good news. The era of hierarchy-driven, keyword-heavy, colored-ribbons-in-your-textbook orthodoxy seems past its peak. More people are talking about composition being a better design principle than inheritance. And there are even some willing to point at the naked emperor; see http://prog21.dadgum.com/156.html for example. There are others. Or perhaps it's just that the old guard is reasserting itself.

Object-oriented programming, whose essence is nothing more than programming using data with associated behaviors, is a powerful idea. It truly is. But it's not always the best idea. And it is not well served by the epistemology heaped upon it.

Sometimes data is just data and functions are just functions.
 
OO tools can be useful in the hands of an experienced coder who can resist the urge to try to model the universe, even when the tool seems to encourage this.
Alex P
 
+brad clawsie Why would you model the universe, rather than the domain, or at most the company?
 
The link is surely a joke! The file name is ppoop.html; I assume the first p is for programming?
 
My contention is to use OO when the goal calls for it.  I could accomplish the goals of that effort in a few lines of Bash and be done with it.
 
If OO zealots are afraid of data, then the impenetrability of badly-written OO code further validates Fred Brooks: "Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they'll be obvious."
 
A possible OOP solution (the best within OOP; I'm not comparing it with the data-driven table) is to extend the OS object with a .ToStringMemo method that just outputs the needed string. In main it is just Os.ToStringMemo, and for a new OS we just add a definition of ToStringMemo (a method, or a readonly field initialized in the constructor of the OsChild object :) )
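A rough Go sketch of that idea, using an interface and one type per OS rather than subclassing (all the names here are invented for illustration):

package main

import "fmt"

// memoer is the behaviour each OS type provides.
type memoer interface {
    Memo() string
}

type unix struct{}
type windows struct{}

func (unix) Memo() string    { return "This is a UNIX box and therefore good." }
func (windows) Memo() string { return "This is a Windows box and therefore bad." }

func main() {
    // Supporting a new OS means adding a new type with its own Memo method.
    var os memoer = unix{}
    fmt.Println(os.Memo())
}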
 
Those examples are nothing! I swear they look nice and clean compared to what goes for industry standard in the Java world.
I can't wait for it to go away... Simplicity cannot remain a dying virtue, simply because the piper will get paid, and more and more people are starting to get that.
Antoine M
 
There is a lot of overdesigned unreadable OO code out there, maybe as much as overspaghettied unreadable procedural code. It's more a problem of the skill and education of developers, of being able to do the simplest thing that works, than a problem of paradigm or language, imho.
 
They are trying to model a sophisticated concept with a trivial example.

They are trying to show how to manage complexity of a large system using design patterns.  This type of example doesn't scale down well, as Rob has expressed.  A simple table and loop and you're done.

I admire the teachers for attempting to model this, but the example doesn't do the concepts justice.  Hopefully this is a trivial example given in a larger context, such as a course project (with enough complexity to warrant the students using the concepts).
 
Interesting. I would have taken the "Hacker" solution and stripped out the "public class PrintOS" and "public static void main(final String[] args)" -- why even declare a class here? It doesn't seem called for. Yet +Rob Pike apparently has me beat -- his "search a small table" solution would be even simpler.
 
Oh dear, what has the world come to? The problem would be solved much more simply using a map. Perhaps the reason this wasn't used was because maps are complicated in Java.

In Go: http://pastebin.com/SWgrx9nE
 
Now, where the problem would get truly interesting is if the problem called for real polymorphism, not just printing a different string. Like, on different OS's, maybe you need to make different OS API calls with different parameters. In that case you'd need different code, not just different data.
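Even then a table can still carry much of the variation, by storing function values instead of strings; a sketch in Go, where the per-OS setup steps are placeholders rather than real OS API calls:

package main

import "fmt"

// Per-OS behaviour as function values; the table still drives dispatch.
var setup = map[string]func(home string) error{
    "linux": func(home string) error {
        fmt.Println("linux: writing", home+"/.config/app.conf")
        return nil
    },
    "windows": func(home string) error {
        fmt.Println("windows: writing settings under the profile dir", home)
        return nil
    },
}

func main() {
    if f, ok := setup["linux"]; ok {
        f("/home/me")
    }
}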
 
I'm loving all of these anti-OO posts lately... It's usually a drastic over-complication and a departure from actual programming. Last year, I printed out a file listing for a co-worker's web application and used it to wallpaper my cube... 14 pages, for a really simple concept. He also used 300% of the budget. Sad, really, because any decent programmer should have been able to complete it on time.

Almost every OO project I see goes over, and almost all non-OO projects get done on time.  Funny, that.
 
ppoop.html
That's the first thing I noticed 
 
People usually end up abstracting a simple entity into single-child families, complicating things with the dream of usability.
Bob Wyman
 
I'm always amazed to find how few people realize that OO programming really is "nothing more than programming using data with associated behaviors." Folk seem to assume that it is much, much more than that.

I remember, for instance, many who complained that "obviously" the people at Microsoft who designed COM objects didn't understand OO programming since COM only supported composition and didn't support inheritance. What they didn't realize is that we actually understood inheritance quite well but we also understood that inheritance drastically increases the complexity of maintaining and understanding systems built from reusable components. Certainly we used inheritance at lower levels to construct objects whose interfaces were exposed using COM, but COM itself only supported composition in order to offer object oriented components that had immutable (fixed) interfaces. There was nothing in this that was in any way "anti-OO." I believe that this simplification of the component model was one of the key reasons for COM's massive success.

Before people complain about OO programming, they should be sure to understand not only what OO means but what it doesn't mean.
 
I'm not convinced that statically coupling transforms to source data is ever a better idea than bringing to bear a reasonable stable of reusable transforms on application specific data. For one thing, this coupling implies that the transforms are always valid, even when they are not. Furthermore, the coupling distorts what should be a far more fluid concern of the addressability of data - there is no reason why selecting an object from a heap (using a reference) should be any different than selecting part of an object (accessing a member).
 
A friend and colleague, Ben, introduced me to the phrase "Object-disoriented Programming". I'm not sure where it comes from, but I have seen that, far too often, it is an apt description.
 
I've noticed with other developers who work in OO languages lately that "pattern language" has invaded almost every design discussion.  I love re-usable components, but this top down design starting with patterns ignores the original purpose; patterns are emergent behavior from your software based on the problem domain and isolating changing behaviors.
 
Original paper needs dependency injection.
 
+Bob Wyman I remember the day I finally grasped the concept where I was like "wait, so it's just.. a structure with its own variables and functions".
 
For me, software development is all about exploring the problem, and trying to write the solution in as flat and declarative a manner as possible. I love tables of parameters. Love 'em. Keeps everything together. Fewer files to open, less scrolling to do.

OTOH, OO always seemed to me to be more of a mindset than a technology: a belief that the structure of the code should mirror the structure of nature; that in some way classes represented inviolable natural truths, and the relationships between them encoded something transcendental.

Which is all nice and lovely when it all works out properly. (But how often does that happen, really?)
 
The Age of the C Coder and OOP desktop application developer is over....long live the Web !!!!!!
 
Absolute programming paradigms are in the eyes of the beholders.. (including OOP)...
 
I don't necessarily disagree with your conclusion, but I always imagined the essence of OOP as "transferring the responsibility for implementation details to the receiver of a function/method call, rather than the sender of the call."
 
C is still a very important language to know. :)
 
+Bob Wyman, I always thought COM's separation of interface from implementation was a good invention (well, I don't know if it was first invented with COM, but that's where I first encountered it), and in fact many OO languages today let you declare interfaces explicitly within the language and then indicate that a class implements that interface.

It makes sense when you view objects as useful for polymorphism. My issue with OO programming is the notion that "everything" needs to be an object; that absolutely everything in an entire program must be "object oriented". I think that's different from just saying, ok, there is this feature in programming languages that lets you declare a class, and that's useful in specific situations, such as when you need polymorphism, and so on.
 
Hear, hear!  OOP is a huge load of nonsense, and is probably one of the greatest mistakes ever made in CS.  As you say, the main idea is utterly simple.  In my PFPL book I describe it as row-major vs column-major order for a matrix of methods vs classes.  You can define the method once for all classes, or you can define the class once for all methods.  These are isomorphic representations, so you can switch between them at will.  BFD.
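A sketch of that matrix in Go, assuming two types and two operations are enough to show both layouts (neither form is taken from the book):

package main

import "fmt"

type circle struct{ r float64 }
type square struct{ s float64 }

// "Column-major": each type defines all of its operations (methods).
func (c circle) area() float64 { return 3.14159 * c.r * c.r }
func (s square) area() float64 { return s.s * s.s }

// "Row-major": each operation is defined once, over all types.
func area(v interface{}) float64 {
    switch v := v.(type) {
    case circle:
        return 3.14159 * v.r * v.r
    case square:
        return v.s * v.s
    }
    return 0
}

func main() {
    // Same answers either way; the two layouts carry the same information.
    fmt.Println(circle{2}.area(), area(square{3}))
}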
 
+Robert Harper - Well, I tend to write in a slightly more functional style (as it suits the sorts of problems that I face), but I have always seen that there are situations where OOP is the best, most appropriate approach.

It is a question of horses-for-courses.

Different types of problem tend to attract different intellectual approaches, which, in turn, are better served by different development paradigms.

It is a sign of the increasing maturity of Software Engineering as a discipline that we have stopped (largely) looking for the proverbial silver bullet, and have started to divide our discussion into sub-disciplines; different communities developing techniques, approaches and technologies that work best for different types of problem.

Bringing the notion of "The best tool for the job" up to the next level...
 
Well, it's fair.  A table will work in that example, but I think the real issue is 'pedagogical challenge.'  

Whenever you want to teach something you need to find an example that people can understand at a glance, and yet not dismiss immediately as too simple for the techniques you want to highlight.   The balance is never "right" so examples like those in the pdf are easy fodder.
 
April fools - "... The discussion had been going on for about 36 hours in late March and early April 2000 ..."
 
Yea, agree with comments above. I would just add that OOP libraries need to die a quick death. We have these gigantic OOP and CRUD libraries and huge APIs floating around now that people just drag into projects thinking they will use all that crap. The Library Model is a relic of Windows C programming that never felt right in the Web world. There have been just as many performance problems with OOP as with scripted frameworks. Trust me.
 
+Mitch Stokely - Could you elaborate a little bit on what you consider the library model to be, as well as why you think it is a bad idea? I would be really interested to hear why you think so.
 
I think OOP for your Data is indeed debatable, sometimes it's good, many times it's bad, most times it depends on the designer. But for Data Structures and APIs, it's really useful. Java and Scala's collection hierarchy is much more useful using OO principles, especially in Scala where I think the type hierarchy is very smart. OOP is here to stay, just like FP is here to stay (glad it's back). But also people abusing OOP (and FP) is (sadly) here to stay
 
+brad clawsie Code maintainability and code testing demands that even the smallest unit of code be as modular as possible. OOP is good (very good), but most people don't like the cost associated with it. If I could afford to remodel the universe every time, I would.
 
+Morris Mwanga Code maintainability is driven by many antagonistic pressures, and it is not always obvious where the right balance is to be found.

For example, readability & pedagogical concerns normally favor flat, straight-through code, with lots of work being done locally. On the other hand, flexibility and modularity favor the breakup of the behavior/functionality into smaller pieces, at the cost of scattering the logic about, to the detriment of discoverability/readability.

The right balance often depends upon the skill and personality mix of the development team as much as it does on more intellectually involved (and therefore politically acceptable) considerations.

If you have a lot of junior developers coming through the organization, you probably want to make the code more readable, since it is more likely that somebody with limited knowledge of the architecture will be trying to figure it all out by reading the code. On the other hand, if the product is being maintained by one or two "old hands", who already know the architecture front-to-back and inside-out, then highly modular "spaghetti inheritance" is probably better, because everybody who matters already knows where everything is.

More and more is it becoming clear that software engineering is about people and personalities far more than about mathematics and computer science; paradigms and methodologies.
 
Ironically, if you think about it, the third solution relies on a table lookup:
storage = new java.util.HashMap()
...
storage.get(System.getProperty("os.name")).

"Hacker Solution ... While this solves the problem, it would not be easy to modify in the future if the problem changes"
1. I think on the contrary their 'hacker solution' is the easiest one to modify (for example, into the third solution if new requirements justify it), since it's short, compact and easy to understand.
2. It's not like there are dozens of new major operating systems coming out every month...

By the way I've fallen in love with data-driven approach too in the last couple of years.
Maybe it's because of the specifics of the field I work in (financial transaction processing), but I find that it works really well.
I can usually encode what needs to be done into a table, like say pad some fields with whitespace or zeros, or translate from one format into another...
{ "field1", rpad, ' ', 33 },
{ "field2", lpad, '0', 22 },
etc.
Then I go through the table in a loop, basically, executing it.
The good thing about it is that the code part is really, really small (so fewer places for bugs), simple and compact. And when you look at the table you basically see the algorithm without any of the syntax cruft that would be there if you were to straightforwardly code up the actions in the table.
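A sketch of that kind of table in Go; the field names, widths, and pad helpers are invented for illustration:

package main

import (
    "fmt"
    "strings"
)

// rpad pads on the right, lpad on the left.
func rpad(s string, c byte, w int) string {
    if len(s) >= w {
        return s
    }
    return s + strings.Repeat(string(c), w-len(s))
}

func lpad(s string, c byte, w int) string {
    if len(s) >= w {
        return s
    }
    return strings.Repeat(string(c), w-len(s)) + s
}

// The table is the algorithm: which field, how to pad it, with what, to what width.
var layout = []struct {
    field string
    pad   func(string, byte, int) string
    char  byte
    width int
}{
    {"field1", rpad, ' ', 33},
    {"field2", lpad, '0', 22},
}

func main() {
    record := map[string]string{"field1": "ACME", "field2": "42"}
    for _, e := range layout {
        record[e.field] = e.pad(record[e.field], e.char, e.width)
    }
    fmt.Printf("%q\n", record)
}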
 
If you have to scatter the logic all around you aren't factoring your classes properly. A little more design time more than pays for itself in maintainability and flexibility of existing code down the road. I liken it to the rule of thumb when drawing or painting. "Spend well over half of your time observing, and less than that making marks". In the 20 or so odd years I've been doing OOD (38 years of coding total), I've found that to be a winner on every project.
 
+William Wells  - OK, I may have been a bit flippant in my choice of words, and I also agree strongly with your sentiment RE design time. I am also prepared to believe that an exceptional design makes the fundamental truths inherent in the system so manifest that all who glance at the source immediately gain enlightenment.

However, in my experience, virtually all systems (even those produced by some fantastically talented and clever people) are far from a paragon of virtue when it comes to design, or even documentation.

My grandma always said something about white gloves and mud ... the gist of it being that virtue is rare, and mediocrity is common. Over time, most systems will be handled by more mediocre men than brilliant ones... and that mediocrity will tend to rub off.
 
It's very true, +William Payne, that many hack coders exist. Fortunately I'm only responsible for the way I code. I confess to being one of those people who views software design as an art form. Sometimes it's a creation art, where the final product stands on its own merits, and sometimes it's a performance art, where my efforts are bounded by time constraints. Still, what keeps me interested in doing this is the challenge of doing my best work on a repeating daily basis. I wouldn't code another 10 minutes if I was just turning out crap with commas. If I start doing that, it will be time to go sell tacos at the beach. :)
 
Don't be so hard on hack coders.  It's the best I can do and I've sworn not to deploy into aircraft or life support systems!!!  ;-)
 
+William Wells - Well, as much as I might sometimes like to think that I am something special, I am probably one of the mediocre developers, in the grand scheme of things. Which is not to say that I do not take pride in my chosen craft, because I most certainly do, but rather that one becomes increasingly aware over the years just how little we (as a profession) have things figured out. That and my perpetual state of confusion and bewilderment. :-)
 
The way I see it, the author tried to define an object. However, his solution seems nuts. What he wants to do is define what set an object belongs to, thus allowing all members within a set to share a single response, the response being to run some program.

His method is nuts: simply define each set, and within the set's object will be the response. So the statement would be something like calling "set membership". For this object it would simply search for membership, typically using an object from the OS, then print the object and response.

So this author had no real idea of how to use an object-oriented language, unless he was trying to define the language, which would be a joke.
 
To the people in this conversation: just because you know big words doesn't mean everybody else knows what half of them mean, like "bewilderment". And no guy should use smiley faces when posting comments.
 
+Ashton Watkins Please accept my most sincere apologies for having offended your sensibilities. I really do hope that your evening was not completely ruined?
 
I can't wait for OOP/Java-hating to become beating a dead horse. GADTs are so much better. All hail MLs.
 
+Steven Harper, it's not a question of whether OOP "works", it's a question of whether it's optimal. Yes, OOP works in Android and iOS, but you don't have an equivalent to compare to that uses less OOP. If you did, you might be able to see which is clearer and more maintainable.
 
OOP promised reusable objects. After 15 years I have no reusable object stores to shop in.
 
+Steve Sampson In fairness you have something better... GitHub. I recall pundits predicting a market for components (beans, COM, etc.) in the mid 90s, but then the web exploded and brought open source to the fore. I'd rather have source than opaque components; I would argue we are better off!
 
"Every if and every switch should be viewed as a lost opportunity for dynamic polymorphism."

How far we've come.
 
I think overall the problem with the linked article isn't the defense of object oriented design. OOP can be VERY powerful. It's that the author decided to defend object oriented design with an example highlighting a Java interface used as a polymorphic construct, even though there is really nothing gained through the implementation. If the author wanted to highlight polymorphism in OOP I think there could be better ways. Anyone reading this webpage should be able to follow a more complicated example.
 
Expositions of functional programming have their own version of the disease exhibited in this link, which is the typical sequence of ever-more-obtuse ways of writing Factorial or Fibonacci...
 
Wow, I went to Pace and studied under Professor Bergin. He is a brilliant man; I learnt a lot of good coding practices from his class, where we built a compiler for Coco.
 
Many of the original proponents of OOP meant it as a way to capture (and encapsulate) the domain complexity. This often gets lost in discussions of particular language features, and certainly objects are not the only way to model the domain, but they have merit.

To give just one example why, consider the RGB color example given by the dadgum link. It considers creating a color class overblown. Well, then tell me how your RGB looks: integer or floating-point, 8-bit or 16-bit per color (for integer), RGB or BGR order in the tuple, which color model are the values measured from? Those are just the most obviously relevant design decisions, and they are i) not answered by "just use a tuple", and ii) at least some of the choices made (e.g., order) are not immediately obvious when you are just given a tuple. So some form of encapsulating structure around the underlying tuple makes sense. Hence, objects.
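For instance, a minimal Go sketch of what that encapsulation buys; the representation chosen here (8 bits per channel, sRGB, R-G-B order) is just one possible set of answers:

package main

import "fmt"

// Color pins down the representation questions in one place:
// 8 bits per channel, sRGB, stored in R, G, B order.
type Color struct {
    r, g, b uint8
}

// NewColorRGB8 documents the order and range of its arguments.
func NewColorRGB8(r, g, b uint8) Color { return Color{r, g, b} }

// RGBFloat exposes the same color as floats in [0, 1],
// so callers never have to guess which convention the tuple uses.
func (c Color) RGBFloat() (rf, gf, bf float64) {
    return float64(c.r) / 255, float64(c.g) / 255, float64(c.b) / 255
}

func main() {
    c := NewColorRGB8(255, 128, 0)
    fmt.Println(c.RGBFloat())
}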

Unfortunately, modeling problem domains is hard, and hence, many naive object models are bad. Blaming this on the modeling approach is unwarranted.
 
+Robert Harper IMO, you are wrong. There's no fixed matrix here. What we have is an extensible matrix. We can extend the matrix by adding new classes with function implementations in them, but we can't add new function parts which work with new classes in existing languages. So, despite all the problems, OOP is the best available way of creating extensible systems.

P.S. OOP isn't part of CS it's part of SE.
 
This might be a classical case of the naked emperor here. Nearly everyone here agrees with Rob Pike, and their own reasons are nearly mediocre. Fact is, OOP is a good thing, plain and simple.
 
Searching a table, as in something like this: osList = ["linux", "windows", "unix"]; os = osList[getOSName()]? If so, then someone had me terribly confused. I thought that was a good practice for avoiding long, drawn-out if statements. And having read the article, I'm at a loss to see how the end solution is better than the "hacker" one, besides being slightly more maintainable (maybe; with all those files, you could lose something). I came away wondering what problem they were trying to solve. If it was as simple as determining the OS, then the first was the best, because it was the shortest.
 
"The discussion had been going on for about 36 hours in late March and early April 2000" i.e. April 1st before noon - April Fool anyone?
 
No syntax you could ever come up with will ever be anything but inferior to pure data. #lisp
 
Let me be definitive: the article had a serious point, but a lot of humour, even some satire.  What is being missed by all the comments here is that the document was an attempt to explain something about 1999 object-oriented approaches. As with all educational devices of this sort it suffers from lack of realism when poked hard. Moreover it is now 13 years on and the world, especially the object-oriented world has moved on. To assess this document with 2012 eyes is to do the document a disservice. It has to be viewed with 1999 eyes.

There are many, many problems with the object-oriented approach even today, being afraid of data is not one of them. Java has problems being object oriented, cf. the bean protocol, it is too data-oriented.  C++ has recognized this problem and is trying to avoid being labelled object-oriented.

Using tables is an integral part of real programs of any paradigm, object-oriented or otherwise. I would happily use a dictionary in Python, or a Map in Java to solve the problem if I was doing it for real. To attack an educational artifice for being unrealistic feels a bit unfair. But I appreciate that sometimes using effectively artificial solutions can be dangerous. If I were writing the article today, it would be very, very different to this one. But I have already made the 1999 → 2012 point.

It is nice to know though that people have actually read the article, though clearly not everyone who has responded to this entry has actually done that. I am pleased the adjective "sincere" has been used here, the article was a serious one, at the time.
 
Russel, I agree with you this was a sincere article. I disagree that this is a "dated" view of OO. Lisp was invented in 1958. Modern Lisps can run rings around most of what people call Object-Oriented in terms of managing complexity, having programs that are easier to reason about and taking advantage of modern, multicore hardware (not to mention being OO!) And to Smalltalk developers, all the criticisms of OO today still held true in 1999 for them: Most languages that are considered to be OO are far less capable than Smalltalk's vision of what it means to be Object-Oriented.
 
+Konstantin Solomatov Sorry, it's you who are wrong.  Of course you can build another matrix, extending it with either new columns or new rows.  The standard representations are isomorphic, so obviously either is replaceable by the other at will.  It is indeed problematic that SE thinks it is somehow separate from CS.  It shows painfully.
 
Nick's point about lisp and smalltalk is extremely important, both with respect to OOD and OOP in general, as well as with respect to the fact that from an oo-lisp or, especially, Smalltalk programmer's point of view, the criticisms against mis-understood ideas about OO apply as equally to anything like the 1999 article as they do to most anything written today about "OO" in languages other than Smalltalk and maybe oo-lisp.

I find Rob's succinct little statement about OOP being "nothing more than programming using data with associated behaviors" to be a little limiting as it fails to fully encapsulate the power of being able to build all of Smalltalk-80 entirely (language syntax and all) up from one generic concept of an Object; and I think on the surface at least it fails to include the critical concept of communicating between objects.

The really sad thing about Bergin and Winder's article is the choice of Java as the language in which they attempt to embody their description of OOP.  It's such an overly verbose language, with soooo many flawed analogies embedded too deeply within it.

I think Harold Abelson and the Sussmans' description (in their book "Structure and Interpretation of Computer Programs") of "object oriented" computation, and their contrasting of it to "stream oriented" processing, is the most succinct and language-agnostic description of these things I've ever read.  Their introduction of assignment, "pointers" (vs. direct bindings), and the Environment Model of evaluation gives one a new view of what "object oriented" implies with respect to computational modelling, and they incorporate message passing to show how an object can be told which behaviour to exhibit.
I think Smalltalk though makes it easier for a novice to approach the concepts of object oriented computation, and Smalltalk adds a decently clean embodiment of inheritance not shown by Abelson et al, but either is better than Java (or C++, or anything else similar).
 
Greg, exactly! I have been grinding my way through SICP and converting interesting exercises into Clojure just to see how it plays out (Huffman trees are a great place to start) and find that damn, that language is good. So clear, so concise, so easy to reason about. My experience with Smalltalk has shown that the language expression is able to be pithy without being terse, clear without being hand-waving: knowing the difference between avoiding complexity and merely hiding complexity like, say, Java does.

I firmly believe 1958 was a banner year and that Alan Kay was really onto something. I just don't think most languages that came after have lived up to the promise. Which, in the end, is what Pike is complaining about, after all.
 
A couple of perhaps relevant quotes from Kay himself (who claims to have invented the phrase "object oriented programming", and says he didn't have C++ in mind at the time either):

``OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.  It can be done in Smalltalk and in LISP.  There are possibly other systems in which this is possible, but I'm not aware of them''
        --Alan Kay

``So the problem is -- I've said this about both Smalltalk and Lisp -- they tend to eat their young.  What I mean is that both Lisp and Smalltalk are really fabulous vehicles, because they have a meta-system.  They have so many ways of dealing with problems that the early-binding languages don't have, that it's very, very difficult for people who like LISP or Smalltalk to imagine anything else.''
        --Alan Kay

There's yet one more thing that's missed by those quotes, but which I think Rob did at least hint at in his comments about inheritance, and which others since have been more adamant about in comments above: the issue of complexity.
 
I'm not sure about OO dialects of Lisp and their common programming environments, but at least in Smalltalk-80 the complexity issue is dealt with directly in the programming environment, which is for all serious intents and purposes inseparable from the language. Without the ability to immediately and directly discover and see exactly which method (function, behaviour) will be invoked by sending a given message to a given object (or class of objects), OOP with inheritance is practically impossible, at least for me.

Jonathan Rees identifies some of the other issues with OOP and OOD in the following post (see the last part particularly, beginning in the paragraph where he says "occasional OO is fine"):

http://www.eros-os.org/pipermail/e-lang/2001-October/005852.html

Paul Graham distils these reasons further:

http://www.paulgraham.com/noop.html
 
"Sometimes data is just data and functions are just functions."
I find that using CLOS, where generic functions are just functions that specialise on types, and classes are like structures that don't /own/ their methods per se, is a nice way of keeping things relatively simple and flexible while indulging in OOP.
Lisp's flexibility also shows up when judicious use of dynamic scope can all but demolish the need for dependency injection frameworks.
+Nick Bauman makes some solid points.
 
+brad clawsie Open Source has a few success stories, but when I search the repositories, the term "field of broken dreams" comes to mind.  Half implemented and abandoned projects litter the Internet.  It still looks like the monolith solution is supreme :-)
 
Does anyone else see a parallel between the progression from the "hacker code" to full-on polymorphic OOP and the growth of startups into big corporate behemoths? OOP has always struck me as some sort of suit with an MBA coming in and telling how we need a bigger accounting department, more middle management, and bigger-longer staff meetings. While in the beginning there was once a solid, undeniable solution to a problem, over time what emerges is an old heavy ship laden with the barnacles of risk-averse paper-pushers rife with extraneous pockets of self-declared importance.  
 
If I were writing this in C#, I would just create a new Attribute with the OS name string to search, add that to a method that returns the OS name, and write a short program that uses reflection that meets all the criteria.  But if I could be reasonable about the solution, I'd just search a hashtable (Dictionary<string,string>).