Feb 28, 2010
Not Invented Here
"In programming, it is also common to refer to the NIH syndrome as the tendency towards reinventing the wheel (reimplementing something that is already available) based on the flawed belief that in-house developments are inherently better suited, more secure, or more controlled than existing implementations. This argument is accepted as flawed because wide usage is much more likely to uncover any existing defects than a reimplementation would. Moreover, peer review of source code in the case of a Free Software or Open Source alternative tends to follow Linus's Law: 'given enough eyeballs, all bugs are shallow.'"
Feb 23, 2010
Patent System
HungryHobo's comment on /.:
Without patents:
1: I write some nice software and sell it.
2a: I make a little money, not enough to quit my day job.
2b: I don't make money, all I've lost is time.
With patents:
1: I try to research previous patents; they're almost unreadable... I have no money to hire a patent lawyer (barrier to entry 1)... so I can't be certain whether my idea has already been patented.
2a: I stop for fear of infringing on someone's patent and being sued into the ground. (barrier to entry 2)
2b: I keep going and write my app... it might be infringing, but I don't think it is...
3a: I make a little money.
3b: I make no money.
4: Someone sues me.
5a: It is infringing - well, they pull out records showing that yes, I did view their patent in the course of my research in step 1 and obviously stole their idea. They get triple damages; I lose my house. (barrier to entry 3)
5b: It is not infringing - so what. I don't have the money for a good lawyer; they win, I lose my house. (barrier to entry 4)
5c: It is not infringing - by some miracle I win... I'm still left with a pile of legal bills, and I lose my house. (barrier to entry 5)
In theory the patent system could help me by letting me be just like the guys who sue in the scenario above, but I don't have the thousands of dollars it takes to get a patent through, nor the time.
Jan 5, 2010
Random thoughts on AI
I'm always pessimistic about AI.
Our current computation model is deterministic. Do you remember how Dijkstra 'proves' that the goto statement is harmful? He uses one (or several) natural numbers to represent the state of your process. We can map several natural numbers to a single rational number, so in fact we can represent any state of any process as a rational number. A Turing machine program has only two possible outcomes: either it terminates in finite time, like a finite decimal, or it gets trapped in an infinite loop, like a repeating decimal. The lambda calculus is proven to be equivalent to the Turing machine.
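As a side note, the "several natural numbers into one number" step can be made concrete with a pairing function. Here is a minimal sketch in Python (my own illustration, not something from Dijkstra's argument); the resulting natural number of course also sits inside the rationals:

    # Cantor pairing: a bijection from pairs of natural numbers to a single natural number.
    def cantor_pair(a, b):
        return (a + b) * (a + b + 1) // 2 + b

    # Fold an arbitrary tuple of counters (a "process state") into one natural number.
    def encode_state(counters):
        n = 0
        for c in counters:
            n = cantor_pair(n, c)
        return n

    # Example: a state described by three counters becomes a single number.
    print(encode_state((3, 5, 7)))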
But our brain is non-deterministic. I have a strong feeling that you can't describe the state of a brain with a rational number, only with an irrational one. You can predict the n-th digit of a rational number, while you can't predict the n-th digit of an irrational number; you won't know it until your computation has reached that point.
The problem is that the rational numbers are closed under basic arithmetic: you can't get an irrational number by applying basic arithmetic to rational numbers. If we can't get irrational numbers from rational numbers, can we get a brain from a Turing machine? Maybe, if we can figure out which state of a program a number like pi would represent.
The foundation of our world is non-deterministic. It seems easy to build something deterministic on top of something non-deterministic, but hard to do the reverse. So I guess it will be hard to build a brain on top of a Turing machine. Fortunately we have a new hope in our age, named quantum computing, which is based on a non-deterministic mechanism. I know little about how it works, but it looks totally different from old-fashioned computation models.
Nov 26, 2009
CS Education
"Programming is usually taught by examples. Experience shows that the success of a programming course critically depends on the choice of these examples. Unfortunately, they are too often selected with the prime intent to demonstrate what a computer can do. Instead, a main criterion for selection should be their suitability to exhibit certain widely applicable techniques. Furthermore, examples of programs are commonly presented as finished "products" followed by explanations of their purpose and their linguistic details. But active programming consists of the design of new programs, rather than contemplation of old programs. As a consequence of these teaching methods, the student obtains the impression that programming consists mainly of mastering a language (with all the peculiarities and intricacies so abundant in modern PL's) and relying on one's intuition to somehow transform ideas into finished programs. Clearly, programming courses should teach methods of design and construction, and the selected examples should be such that a gradual development can be nicely demonstrated. "
- "Program Development by Stepwise Refinement", Niklaus Wirth, 1995
- "Program Development by Stepwise Refinement", Niklaus Wirth, 1995
Nov 24, 2009
Browserless Web Development
note: "Web development" in this article doesn't include UI design/implementation, which means all (backend, database, html etc.) except css/javascript.
I recently came up with a new way to measure code and web developers. A traditional web development loop may look like this:
- read new feature/story
- code
- try it in a browser; if there's any problem, goto 2 (goto considered useful here)
- commit your code
An obvious problem here is that there's no room left for *automated* tests. You may write automated tests in step 2, but nothing forces you to do it. A better process (I think) looks like this:
- read new feature/story
- code
- write a piece of code to test the code from step 2; if there's any problem, goto 2
- commit your code
So we change only step 3, removing the browser from the process. *Automated* testing becomes an explicit step here. You can swap steps 2 and 3 in the latter process, in which case you'll write tests first; that's not the point here, so I left them unchanged. The point is that if you don't have a browser at hand, you'll be forced to test your code by writing code, which is automated, reusable, and cool. You'll find yourself working in TDD style naturally, even if you don't know what TDD is.
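To make step 3 concrete, here is a minimal sketch using Python's unittest; create_user and its validation rule are hypothetical stand-ins for whatever backend code step 2 produced, not code from any real project:

    import unittest

    # Hypothetical backend function from step 2: plain code we can call directly,
    # no browser involved.
    def create_user(name, email):
        if not name or "@" not in email:
            raise ValueError("invalid user data")
        return {"name": name, "email": email}

    class CreateUserTest(unittest.TestCase):
        def test_valid_user(self):
            user = create_user("alice", "alice@example.com")
            self.assertEqual(user["email"], "alice@example.com")

        def test_invalid_email(self):
            with self.assertRaises(ValueError):
                create_user("bob", "not-an-email")

    if __name__ == "__main__":
        unittest.main()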
That's what I call Browserless Web Development. The less a web developer uses a browser to validate his work, the better he and his code are.
Oct 26, 2009
Software Engineering is ...
You know, Dijkstra is really awesome.
"Ours is the task to remember (and to remind) that, in what is now called “software engineering”, not a single sound engineering principle is involved. (On the contrary: its spokesmen take the trouble of arguing the irrelevance of the engineering principles known.) Software Engineering as it is today is just humbug; from an academic —i.e. scientific and educational— point of view it is a sham, a fraud."
"Universities are always faced with something of a problem when there is a marked discrepancy between what society asks for and what society needs. In our case the need is clear: the professional competence of the Mathematical Engineer, familiar with discrete systems design and knowing how to use formal techniques for preventing unmastered complexity from creeping in. But said war “out there” all but prevents this need from being perceived, and what our immediate industrial environment overwhelmingly seems to ask for is different brands of snake oil, Software Engineering, of course, being one of them. And as, with the recession lasting longer and longer, the external pressures on the Universities to do the wrong things only mount, it is only to be expected that part of the campus is going to be included in the battlefield."
"The task of the first-class University, however, is absolutely clear. Industry being its customer, consultancy must tell industry what it wants to hear; it is the task of the first-class University to tell industry what it does not want to hear, whereby it is the rĂ´le of its scientific authority to ensure that the sting of the academic gadfly really hurts."
-- Edsger W. Dijkstra, http://www.cs.utexas.edu/users/EWD/transcriptions/EWD11xx/EWD1165.html
Update:
It's interesting that just after I read Dijkstra's article, Joel Spolsky published a new blog post, "Capstone projects and time management", which to some extent takes the opposite side from Dijkstra. Joel has written a lot of wisdom, but this new post just looks like an April Fools' joke. A smart guy has already written a perfect answer to Joel.
"Ours is the task to remember (and to remind) that, in what is now called “software engineering”, not a single sound engineering principle is involved. (On the contrary: its spokesmen take the trouble of arguing the irrelevance of the engineering principles known.) Software Engineering as it is today is just humbug; from an academic —i.e. scientific and educational— point of view it is a sham, a fraud."
"Universities are always faced with something of a problem when there is a marked discrepancy between what society asks for and what society needs. In our case the need is clear: the professional competence of the Mathematical Engineer, familiar with discrete systems design and knowing how to use formal techniques for preventing unmastered complexity from creeping in. But said war “out there” all but prevents this need from being perceived, and what our immediate industrial environment overwhelmingly seems to ask for is different brands of snake oil, Software Engineering, of course, being one of them. And as, with the recession lasting longer and longer, the external pressures on the Universities to do the wrong things only mount, it is only to be expected that part of the campus is going to be included in the battlefield."
"The task of the first-class University, however, is absolutely clear. Industry being its customer, consultancy must tell industry what it wants to hear; it is the task of the first-class University to tell industry what it does not want to hear, whereby it is the rĂ´le of its scientific authority to ensure that the sting of the academic gadfly really hurts."
-- Edsger W. Dijkstra, http://www.cs.utexas.edu/users/EWD/transcriptions/EWD11xx/EWD1165.html
Update:
It's interesting just after I read Dijkstra's article, Joel Spolsky published a new blog post "Capstone projects and time management", to some extent on the opposite side of Dijkstra. Joel wrote lots with wisdom, but this new post just looks like an april fool's joke. A smart guy wrote a perfect answer to Joel already.
Oct 9, 2009
The Correct Refactor Flow
Read this enlightening piece here.
- Get assigned a task to implement a new feature.
- Refactor the code until that feature is as easy to add as possible.
- Add the feature.
- Submit.
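A toy illustration of steps 2 and 3 (the function names and the discount feature are made up by me, not taken from the linked article): suppose the new feature is an order discount. Refactor first so the pricing logic lives in one place, and the feature then drops in as a tiny change:

    # Before: the total is computed inline wherever it is needed, so a
    # discount would have to be bolted on in several places.
    #
    # total = sum(price * qty for price, qty in cart)

    # Step 2 (refactor): pull the pricing logic into one function.
    def order_total(items):
        return sum(price * qty for price, qty in items)

    # Step 3 (add the feature): the discount is now a one-line extension.
    def discounted_total(items, discount=0.0):
        return order_total(items) * (1 - discount)

    print(discounted_total([(10.0, 2), (5.0, 1)], discount=0.1))  # 22.5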
Update: I found that this actually originates from Martin Fowler's amazing book "Refactoring", which is filled with ideas that came out of Smalltalk practice. It's a shame I didn't read that book earlier :-( "Refactoring" (or "Refactoring: Ruby Edition") is a must-read.