Google make yet another failed programming language

poita 101 Nov 11, 2009 at 06:29

http://golang.org/

Language summarized:

  • Type-safe, but no generic programming (because that made early Java and C# soooo much fun to code in)

  • Similar syntax to C++/Java/C#, but just different enough to confuse you. e.g. flow statements are the same, but without the parentheses.

  • Maps are built in, but you cannot use structs or arrays as keys…

  • ++ and -- are NOT expressions. So you can’t write i = j++;

  • No pointer arithmetic (but this is meant to be a systems programming language….)

In other words it’s just another failed language that has somehow managed to learn absolutely nothing from other languages. I really thought Google were better than this.

34 Replies


v71 105 Nov 11, 2009 at 08:23

Do we really need another language? I mean, why don’t they improve C++ by adding libraries to handle web programming and other sorts of stuff?
I agree that we never stop learning, but why change the syntax?
With C++ plus libraries you can do almost anything that Java can. I recently came across the C++ web toolkit; this library gives you functions much like Java’s to handle web development.
Now let’s get on with the flame war.
Ah no, wait, this is not gamedev.

poita 101 Nov 11, 2009 at 08:40

We do need new programming languages. C++ is fundamentally broken with its horrible syntax, especially with templates. D is a step in the right direction, but I don’t know what this Go is supposed to be. It really is just a stereotypical “hey look, I wrote a language too!” They haven’t added or improved anything – they’ve just mixed and matched features from other languages with little rhyme or reason.

And yeah, I guess this isn’t gamedev, but this place is hardly bustling with activity at the moment, so a little discussion can’t hurt. If a mod feels this would be better in Off Topic then feel free to move it.

Mihail121 102 Nov 11, 2009 at 09:36

@poita

We do need new programming languages. C++ is fundamentally broken with its horrible syntax, especially with templates. D is a step in the right direction…

Big ‘YES’. C++ is indeed broken, although it’s powerful if YOU know what YOU are doing. C++ is strictly forbidden where I work, except in the case where you’ve written a portable library, verified and validated to compile with any compiler, so no one will ever have to look at the source. D… oh God, where do I start? It has a nice concept, a smooth learning curve, and it is compiled, but if you start comparing the productivity against C#, Java, etc., it quickly drops out of the picture.

, but I don’t know what this Go is supposed to be. It really is just a stereotypical “hey look, I wrote a language too!” They haven’t added or improved anything – they’ve just mixed and matched features from other languages with little rhyme or reason.

No, it’s not. Google does have a clear idea of what the language is supposed to do. Well, they claim stuff, but in my eyes the language is far from delivering it as easily as the site claims. Good concurrency support? There is already a language for that, and its name is Erlang. Compare Go to Erlang, please, and tell me what’s different?

StudioFortress 101 Nov 11, 2009 at 12:19

@Mihail121

No, it’s not. Google does have a clear idea of what the language is supposed to do. Well, they claim stuff, but in my eyes the language is far from delivering it as easily as the site claims. Good concurrency support? There is already a language for that, and its name is Erlang. Compare Go to Erlang, please, and tell me what’s different?

This is more of a dynamic vs static reason, but I would love to have full static typing support in Erlang. I often have bugs in my code where some process has passed the wrong value to another process, which then crashes some time later. It’s this delay that is the issue, as it makes debugging the code more difficult.

Static type checking would find most of my bugs for me, and at the point where they occur. Once found, these are typically just a one-line fix.

I personally find Erlang to be quite slow and it also doesn’t have any proper tool support. There are editor plugins for Eclipse and NetBeans and a bunch of different tools you can run but there is no do-all Erlang IDE. Most of the tools available are usually buggy, badly supported or both. It’s not in the same league as say the tools for C#, Java, C++, etc. If Google build some good tools for Go then I’d be happy to switch.

alphadog 101 Nov 11, 2009 at 17:14

http://www.youtube.com/watch?v=rKnDgT73v8s

Is it a Pythonic C++, or did they jam C++ into Python? ;)

There’s a lot of experimentation going on with getting the right mix of various factors into a language that does concurrency well and with ease. Expect lots of languages to resurface (Erlang), and others to come and go (Go, Axum, etc.).

Yes, it may fail and there will be many more. Nothing wrong with it. The churn is inevitable with the void being created by the umpteen-core systems coming into the hands of all kinds of users.

The key is to not be a fool and invest too much time in any language that doesn’t have a full tool chain and a big community behind it…

fireside 141 Nov 11, 2009 at 17:48

Ars Technica did an article on it also:

http://arstechnica.com/open-source/news/2009/11/go-new-open-source-programming-language-from-google.ars

I’d like to see some compiled language take off like this: nearly as fast as C, with built-in garbage collection. But dropping OO was a mistake in my mind. They say there are some work-arounds, but no sub-classing. I don’t know if it was for speed or what, but that would put a damper on design usability. I’m not sure I could even write a program that wasn’t OO anymore.

v71 105 Nov 12, 2009 at 00:19

So which features would you like in your ideal language?

poita 101 Nov 12, 2009 at 00:19

@fireside

Ars Technica did an article on it also:

http://arstechnica.com/open-source/news/2009/11/go-new-open-source-programming-language-from-google.ars

I’d like to see some compiled language take off like this, nearly as fast as C with built-in garbage collection, but dropping OO was a mistake in my mind. They say there are some work arounds but no sub-classing. I don’t know if it was for speed or what, but that would put a damper on design usability. I’m not sure I could even write a program that wasn’t OO anymore.

While I obviously dislike the language, I think dropping type hierarchies was a good idea.

They are absolutely right when they say that programmers spend too much time thinking about the correct tree structure to use, and the reason is simply that you cannot organize the world into a tree. The relationships between real-world concepts do not fit a tree’s topology. Category theorists would be laughing their heads off if they knew what programming language designers had been doing for the past 20 years.

fireside 141 Nov 12, 2009 at 00:57

Category theorists would be laughing their heads off if they knew what programming language designers had been doing for the past 20 years.

I find it useful and I think most people do, that’s why it’s so popular. It’s like arguing that shoes are useless or something. Everyone already wears shoes and they know they are useful. A theorist isn’t really necessary anymore because it’s used in practice in countless large programs. I use very few interfaces and that’s what they say is a replacement. Extending base classes is an excellent way of designing and enlarging programs.

poita 101 Nov 12, 2009 at 01:59

@fireside

I find it useful and I think most people do, that’s why it’s so popular. It’s like arguing that shoes are useless or something. Everyone already wears shoes and they know they are useful. A theorist isn’t really necessary anymore because it’s used in practice in countless large programs. I use very few interfaces and that’s what they say is a replacement. Extending base classes is an excellent way of designing and enlarging programs.

Most people find it useful because it IS useful, but only in a very limited capacity.

Try to write a large program using type hierarchies and you’ll see how problematic they are. Concepts simply do not fit a tree structure.

alphadog 101 Nov 12, 2009 at 02:03

@fireside

dropping OO was a mistake in my mind

The issue is that OO and concurrent are orthogonal concepts. Many times, objects represent things or states that use/depend on/implement threading and other similar concurrent ideas.

To combine them is to increase the complexity of that language. That’s why you find most concurrent languages do not follow an OO paradigm. Some do, but most steer clear.
@poita

the reason is simply because you cannot organize the world into a tree.

While I do think that many developers see too much of the “Shape-Circle-Square” approach to architecting in an OO setting, it’s a stretch to say that we should drop it as a tool available to developers. To use an analogy: if a carpenter relies on the hammer too much, you don’t take away the hammer; you make sure the worker knows about the power nailer and the other tools, and when to apply them.

poita 101 Nov 12, 2009 at 02:34

@v71

So which features would you like in your ideal language?

I haven’t decided yet. It’s something that I discuss with myself on a daily basis.

But here are some minimum requirements:

  1. Don’t force anything onto the programmer (e.g. mandatory GC)

  2. Wherever possible, any language feature should be part of its libraries rather than being hard-coded into the compiler.

  3. Language features should be axiomatic, by which I mean: if a feature can be constructed from other features, then it should not be a feature of the language. For example, for-loops should not be a built-in construct, as they can be built up from an if-statement and a goto. I imagine my ideal language would define a for loop as something like:

    for (init; cond; iter) {block} := init; start: if (!cond) goto end; block; iter; goto start; end:

Obviously the syntax there is missing a lot of information and there’s no way I’d actually advocate that syntax. I’m just demonstrating the fact that it could have been defined in terms of more axiomatic language features. Of course you need extra code there to handle breaks and continues, but that’s easy.

  4. Your code should have as much access to compiler information at compile time as possible. Compile-time reflection is really important.

One thing that I’m currently mulling over is whether or not types are axiomatic. In one sense, types are just a set of requirements on the layout and semantics of the data that is passed around a program.

What I’m wondering is whether or not you could construct a language where the type system is part of the standard library rather than being hard-wired into the compiler.

Some languages have preconditions for their functions, for example:

float sqrt(float x)
precondition (x >= 0)
{ ... }

could it not be the case that the type is just a precondition, too?

float sqrt(x)
precondition(typeof(x) == float && x >= 0)
{ ... }

Of course, writing it in the second form would be tedious, so the first syntax could be provided as short-hand.

That’s just a few things. I could go on forever, but that’s what’s currently running round in my mind.

SamuraiCrow 101 Nov 12, 2009 at 04:03

@poita

  3. Language features should be axiomatic, by which I mean: if a feature can be constructed from other features, then it should not be a feature of the language. For example, for-loops should not be a built-in construct, as they can be built up from an if-statement and a goto. I imagine my ideal language would define a for loop as something like:

    for (init; cond; iter) {block} := init; start: if (!cond) goto end; block; iter; goto start; end:

Obviously the syntax there is missing a lot of information and there’s no way I’d actually advocate that syntax. I’m just demonstrating the fact that it could have been defined in terms of more axiomatic language features. Of course you need extra code there to handle breaks and continues, but that’s easy.

Your language has an extra goto inside the loop. It should be:

for (init; cond; iter) {block} :=
init;
goto start;
continue:
block;
iter;
start:
if (cond) goto continue;
end:

Note that the end label is only called by a break statement implemented as:

goto end;

@poita

  4. Your code should have as much access to compiler information at compile time as possible. Compile time reflection is really important.

One thing that I’m currently mulling over is whether or not types are axiomatic. In one sense, types are just a set of requirements on the layout and semantics of the data that is passed around a program.

What I’m wondering is whether or not you could construct a language where the type system is part of the standard library rather than being hard-wired into the compiler.

I’m tinkering with the idea of an extensible programming language also. I’m having fun with a PEG parser generator and that will probably be the basis for my work. But for now, I’m going to translate code into LLVM Assembly language.

This is getting off topic and should probably be its own thread but what do you want in your language, poita? BTW, you should try LLVM Assembly. There are a few preconditions to running it through the optimizer but it would have caught that loop inefficiency earlier.

poita 101 Nov 12, 2009 at 05:49

Your language has an extra goto inside the loop. It should be:

Hmm? I don’t see how that will do any fewer jumps.

This is getting off topic and should probably be its own thread but what do you want in your language, poita? BTW, you should try LLVM Assembly. There are a few preconditions to running it through the optimizer but it would have caught that loop inefficiency earlier.

I’m not sure what I want yet. Those things I listed are the minimum requirements.

I’ve been trying to think of ways to organize and parameterize code, but I’m struggling to find something that will consistently produce optimal code.

One particular problem that has stumped me is with list splicing.

As I’m sure many programmers know, splicing a linked list is a constant time operation if you don’t need to keep track of the size. If you do need to keep track then it becomes O(n).

But here’s the thing: it’s only O(n) because you don’t know how many list elements you’re removing. If you knew how many elements you were removing then you could write a splice function that accepted the number of elements removed as an additional parameter to do the splice in O(1) time.

The funny thing is: you *could* always know how many items you removed. List splicing requires that you have iterators for the positions that you want to splice, and the only way to get iterators from a list is to linearly traverse them. So, some time prior to the splice, you have traversed the list to get the splice position iterators – you just haven’t kept count of how many elements you’ve traversed. If you did, then the splice could continue in O(1).

But how would you deal with this? You could have retrieved the iterators using std::find. Should std::find also return a count of the number of elements traversed? Is that all it should return?

The STL algorithm functions are very good at returning useful information, and in Alexander Stepanov’s book, Elements of Programming, he recommends that a function should return all useful information that it has computed in the process of its execution.

But how much useful information should you return? Should std::find also return a set of objects that it passed on the way? There are certainly scenarios where that information would be useful, but of course it would not be useful in the vast majority of cases.

Let’s just say you parameterize your functions so heavily that you could pick and choose what was calculated along the way. What about higher level functions that were composed of several of these highly parameterized functions? The number of parameters for that higher level function will grow exponentially.

As far as I’m aware, there are only a few options you have:

  1. Stop caring about linear optimizations and just accept that you can’t write optimal programs in a top-down fashion.

  2. Make the language declarative, so that you don’t actually write these functions, but instead write what you want them to accomplish and the compiler figures out the best way to implement them.

  3. Something else.

So that’s currently where I’m at, and I’m undecided about what to do. Reductionism or holism?

Mihail121 102 Nov 12, 2009 at 09:01

Look, poita, I generally agree with everything you’ve said, but you, I and A. Stepanov will leave this world unchanged, because proposing ultimate solutions like ‘the world does not fit into a tree, so do not use OOP’ will make people wonder why not, when it has proven to work very well in practice. Can you suggest a method that allows simple design for complex relations and processes? Can you suggest how to reimplement Microsoft Office 2007, for example, with another concept such that the development won’t take years?

There is the concept of complexity in the world, for example Kolmogorov complexity. I need to invest an effort, and it has a certain lower bound, to achieve something. OOP helps us reduce the source code complexity towards that lower bound AND get a meaningful (for us) model of it.

I see you want innovation, I see you have plans and ideas and so do I, but what do people want? Completely parameterized functions returning all potentially useful information along the way? Yeah, right… In our engineer minds it’s a nice and useful idea; we know what’s going on, we know what we want and how it should be. But the 16-year-old that SAP just hired and taught .NET does not care, is not aware of the situation; he’s happy to inherit the Dictionary class and customize it, because it works! That’s how reality is, and it has always been like that.

Think about something else: according to research, humanity is ready to accept the fact that at least 100 people die in car accidents every week. It’s ready to accept that two planes crash every year. Of course we can make things better, but nobody wants to; current tools are still considered high technology and enough to keep people happy. People need concurrency? They use Go.

poita 101 Nov 12, 2009 at 10:02

@Mihail121

Look, poita, I generally agree with everything you’ve said, but you, I and A. Stepanov will leave this world unchanged, because proposing ultimate solutions like ‘the world does not fit into a tree, so do not use OOP’ will make people wonder why not, when it has proven to work very well in practice.

Well first off, Stepanov already has changed the way programming works by co-inventing generic programming and developing the concept of iterators.

Second, if OOP works so well in practice then why has Go removed it? Why is C still such a popular language?

And where is your evidence that OOP is working “very well” in practice? It is used in practice, yes, but that doesn’t mean that it is working, or that it cannot be improved. Sure, programs are being written in the OOP style, but who’s to say that they couldn’t have been written twice as quickly or perform twice as well using some other method?

Can you suggest a method that allows simple design for complex relations and processes? Can you suggest how to reimplement Microsoft Office 2007 for example with another concept such that the development wont take years?

Not yet, but I’m working on it, and I like to think that I’m getting somewhere.

OOP helps us reduce the source code complexity towards the lower bound AND get a meaningful (for us) model of it.

Prove it, or at least show some justification for the claim. I’ve only found OOP useful when I need run time polymorphism, which is fairly rare.

I see you want innovation, I see you have plans and ideas and so do I, but what do people want? Completely parameterized functions returning all potentially useful information along the way? Yeah, right…

No? Might want to read my last post again. The idea of completely parameterizing every function is absurd to me.

In our engineer minds it’s a nice and useful idea; we know what’s going on, we know what we want and how it should be. But the 16-year-old that SAP just hired and taught .NET does not care, is not aware of the situation; he’s happy to inherit the Dictionary class and customize it, because it works!

20 years ago people just wrote min and max functions for floats, ints, doubles and whatever type they needed, and it worked! Does that mean that templates haven’t helped? No.

Again, just because we can get by with the status quo doesn’t mean that it cannot be improved, and it doesn’t mean that the status quo cannot change.

Of course we can make things better, but nobody wants to, they are still considered high-technology and enough to keep people happy.

Umm, what? Everything continually changes and improves. I’m not sure what you’re trying to say here.

SyntaxError 101 Nov 12, 2009 at 10:42

All I can say is it isn’t for me because I don’t want to be forced to use a GC all the time. Also even though C++ isn’t perfect, it has the features I like so until something better (as defined by me) starts to take hold, I’m going to stick with it.

fireside 141 Nov 12, 2009 at 15:17

I think what’s happened is that people use C or C++ for speed/low-level libraries and managed languages as a layer above that. It would be hard to imagine something really changing that, because there’s so much C code out there. Go may or may not end up being a specialized language, but that’s all it will be. I wish they would clean up C++ and make it more consistent, but even that would be hard to do now. With its concurrency or whatever, Go is still “almost as fast as C”. Where is the advantage?

alphadog 101 Nov 12, 2009 at 15:21

I love it. In the past, lots of programmers complained about memory management.

Along came languages that solved this boilerplate-ish type of coding and “liberated” the developer.

So what happens?

Developers now bitch about being “forced” to use GC.

Funny how that works out, huh?

_oisyn 101 Nov 12, 2009 at 15:59

@poita

  1. Languages features should be axiomatic, by which I mean, if a feature can be constructued from other features, then it should not be a feature of the language.

My god. I hope I never have to work with a language that you designed. Syntactic sugar helps a great deal with respect to productivity and code maintainability. Your argument could have just as well been to use assembly, because you can do anything with assembly. And why add, if you can increment? Why multiply, if you can add?

Why have closures, if you can have regular function pointers which can’t be defined in local scope and which can’t access the variables out of the local scope, which doesn’t matter because you can just as well put all the local variables in the struct and pass it to the global function. No thank you ma’am, I’m pretty fine with a language construct that allows me to express myself in a less verbose way which makes me more productive, my code less error-prone and more readable.

TheNut 179 Nov 12, 2009 at 16:13

@_oisyn

Your argument could have just as well been to use assembly, because you can do anything with assembly.

I was going to suggest LoseThos :lol:

SamuraiCrow 101 Nov 12, 2009 at 16:57

@poita

Hmm? I don’t see how that will do any fewer jumps.

It moved the unconditional GOTO outside the loop to the very beginning. Inside the loop it will only have the conditional GOTO.

-edit-

Has anybody else here besides me worked in LLVM Assembly? It seems to me that it, in conjunction with my partner’s LLVM-PEG parser generator, could reach poita’s ideal.

Mihail121 102 Nov 12, 2009 at 18:16

Do not bash me, I’m on your side.
@poita

Well first off, Stepanov already has changed the way programming works by co-inventing generic programming and developing the concept of iterators.

Iterators – I agree. Generic programming – I do not consider it an invention, sorry.

Second, if OOP works so well in practice then why has Go removed it? Why is C still such a popular language?

C is my favorite language even, because it does not have OOP. OOP works well in practice, however. I guess Go removed it, because, as alphadog pointed out: “The issue is that OO and concurrent are orthogonal concepts.”

And where is your evidence that OOP is working “very well” in practice? It is used in practice, yes, but that doesn’t mean that it is working, or that it cannot be improved. Sure, programs are being written in the OOP style, but who’s to say that they couldn’t have been written twice as quickly or perform twice as well using some other method?

Oh, come on, not that discussion again. I have no evidence for something like that and I also have doubts. But I see software around and it seems to be doing its job well enough. It’s possible that it could perform better or be written more easily with some other language or concept, but please, read this argument this time: structure is an important feature of software (code) for us humans.

Not yet, but I’m working on it, and I like to think that I’m getting somewhere.

Show us what you have and let’s discuss it, I’ll be glad. Or contact me on ICQ, SKYPE, …, it’s a nice discussion.

20 years ago people just wrote min and max functions for floats, ints, doubles and whatever type they needed, and it worked! Does that mean that templates haven’t helped? No.

You really like templates, don’t you? I guess people didn’t use them back then because no compiler or language supported them. But I’m certain every programmer who’s written the same function for 7-8 types was sure a better idea would be to parameterize it.

Again, just because we can get by with the status quo doesn’t mean that it cannot be improved, and it doesn’t mean that the status quo cannot change.

It does not mean it’s bad either.

JarkkoL 102 Nov 12, 2009 at 18:44

@poita

The funny thing is: you *could* always know how many items you removed. List splicing requires that you have iterators for the positions that you want to splice, and the only way to get iterators from a list is to linearly traverse them. So, some time prior to the splice, you have traversed the list to get the splice position iterators – you just haven’t kept count of how many elements you’ve traversed. If you did, then the splice could continue in O(1).

No, you couldn’t. After you have the two iterators for splicing, a third iterator could be adding/removing values in between them, messing up the count values. I made a design decision in my list implementation to maintain the list size within the list class and only allow splicing of an entire list onto another in O(1) time. It’s much rarer to splice than to ask for the size of the list.

SyntaxError 101 Nov 12, 2009 at 19:44

@alphadog

I love it. In the past, lots of programmers complained about memory management.

Along came languages that solved this boilerplate-ish type of coding and “liberated” the developer.

So what happens?

Developers now bitch about being “forced” to use GC.

Funny how that works out, huh?

Programmers aren’t a monolithic group. Coming from Fortran I never complained about memory management. In fact it was an improvement. These days I typically use my own GC. If I write it myself, I have full control over how it’s implemented and where I use it. I typically like reference counting because it’s simple and predictable yet I only tend to use it for semi-permanent data structures. I can still pass around normal pointers for most operations. I can also use normal pointers for back-pointers in tree structures and avoid many GC issues. This kind of thing is hard to do if someone hands you a GC system which you are forced to use as is.

alphadog 101 Nov 12, 2009 at 21:31

Never said they were. That’s why I said “lots”, not “all”.

When the industry was extolling the virtue of using Java or .NET (when it came out), one of the reasons was to be freed from the burden of memory management that C++ imposed, was it not?

In fact, I meant to illustrate that language design is always a huge set of trade-offs. And, no matter what method you pick, you’ll always find a curmudgeon that wants it the opposite way. :)

poita 101 Nov 13, 2009 at 00:44

@_oisyn

My god. I hope I never have to work with a language that you designed. Syntactic sugar helps a great deal with respect to productivity and code maintainability. Your argument could have just as well been to use assembly, because you can do anything with assembly. And why add, if you can increment? Why multiply, if you can add?

I think you may have misunderstood me. I meant that the features should not be *hard-coded* into the language. There will still be for loops, and they’ll work exactly as they do now, except that they’re part of the library and you can write similar looping constructs at your will.

At the lowest level, the language *will* be similar to assembly, however there will be mechanisms for abstraction so that you can write code at a high level.

Why have closures, if you can have regular function pointers which can’t be defined in local scope and which can’t access the variables out of the local scope, which doesn’t matter because you can just as well put all the local variables in the struct and pass it to the global function. No thank you ma’am, I’m pretty fine with a language construct that allows me to express myself in a less verbose way which makes me more productive, my code less error-prone and more readable.

Again, things like closures would not be part of the core language, but instead would be part of a library. The obvious advantage being that, if you don’t like how closures worked then you could modify the source yourself, or you could write a new style of closure that worked for your particular needs.

structure is an important feature of software (code) for us humans.

I don’t see where I said that we should remove structure from languages. OOP is one way to structure programs. Is it the best? I don’t think so, and that’s why I’m trying to find better ways.

Show us what you have and let’s discuss it, I’ll be glad. Or contact me on ICQ, SKYPE, …, it’s a nice discussion.

I’d be happy to discuss it, but my ideas are still immature, and it’s all still in my head. I’ve already given a few characteristics of what the language would be like, but as I explained earlier I haven’t solved some core problems yet, and the solution to those could completely change the language (from being imperative to declarative). As such, I don’t see any point trying to develop anything concrete until I solve those problems.


I don’t mean to come across as hostile. I just don’t understand your attitude of “people are happy, so let’s just live with what we’ve got, even if you think it could be improved.” That doesn’t really strike me as a good mindset for progress.

No you couldn’t. After you have the two iterators for splicing, there could be a third iterator adding/removing values in between them, messing up the count values. I made the design decision in my list implementation to maintain the list size within the list class and only allow splicing of an entire list into another in O(1) time. It’s much rarer to splice than to ask for the size of the list.

That doesn’t make it impossible, it just makes it more difficult. You know whether the third iterator is between the first two or not (due to linear traversal), so you know how it is affecting the count.

Regardless, I only need to argue the specific case here. What I’m saying is that there *are* situations where having extra information could dramatically improve performance, but in many cases that information would only be available if you modify subfunctions.

A similar problem is stream fusion. Let’s say you wanted to do this:

for (int i = 0; i < n; ++i)
  foo();
for (int i = 0; i < n; ++i)
  bar();

Of course, this can be improved by joining the loops (provided bar() doesn’t rely on specific side-effects of foo()):

for (int i = 0; i < n; ++i)
{
  foo();
  bar();
}

But what if you had already factored out the loops?

foo_loop();
bar_loop();

The only way you can merge the loops then is to break the structure of your program (or hope the optimizer can do it for you).

And that really is what I see as a core problem of many imperative languages: they often force the programmer to choose between fast, unstructured code, and slow, structured code. I don’t believe that programmers should ever have to make that decision.

B7dcbc0c0f07253f25ff5c25fe38c081
0
SamuraiCrow 101 Nov 13, 2009 at 02:00

@poita

The optimizer cannot fix the first one to be like the second one, nor the third one to be like the second one. It can fix the third one to be like the first one, because they are equivalent and subject to inlining (assuming the functions are called only once). The second one is not equivalent, however: in the second, each call to foo() is immediately followed by a call to bar(), while the first and third make n calls to foo() followed by n calls to bar().

What could be done to speed up the code is this: since i is not used inside the loop, it could be converted to

for (int i=n; i>0; --i)
  foo();
for (int i=n; i>0; --i)
  bar();

since a comparison against 0 is cheaper than a comparison against a variable bound.

Optimizers only replace codes with equivalent functionality. They don’t undo bad programming.

But as for your idea about extensible programming, it’s not original. There is a project by one of HP’s programmers on the back-burner at the XL programming language website on SourceForge.

36b416ed76cbaff49c8f6b7511458883
0
poita 101 Nov 13, 2009 at 02:53

Sorry SamuraiCrow, I should have been more clear.

I wasn’t talking about any particular instance of an optimizer, but a hypothetical optimizer for the hypothetical language that could in theory merge those two loops if it found that their merged functionality was equivalent.

My point was that you have to pick between structure (the third one) and performance (the second one), or just pick structure and hope that the optimizer can fix it for you.

As for XL: I’m aware of it, and it looks quite similar to what’s running around in my mind, but it doesn’t appear that they have any sort of solution to my list splicing scenario above. I’m really starting to think that the ideal language absolutely must be declarative. I don’t see any way around it.

Fe8a5d0ee91f9db7f5b82b8fd4a4e1e6
0
JarkkoL 102 Nov 13, 2009 at 03:06

How would you keep the indices in each iterator referring to the list up to date? I don’t see any practical solution to it which wouldn’t add non-constant overhead to every operation that modifies the list, but that would defeat the purpose.

36b416ed76cbaff49c8f6b7511458883
0
poita 101 Nov 13, 2009 at 03:16

@JarkkoL

How would you keep the indices in each iterator referring to the list up to date? I don’t see any practical solution to it which wouldn’t add non-constant overhead to every operation that modifies the list, but that would defeat the purpose.

Well it wouldn’t be practical at all, that’s my point :) There is no practical solution using imperative, structural programming, but in theory it could be done.

Fe8a5d0ee91f9db7f5b82b8fd4a4e1e6
0
JarkkoL 102 Nov 13, 2009 at 03:21

Of course you could change the rules and say that every iterator referring to the list is invalidated upon list mutation, like with std::vector ;) Just dunno how it would change the usefulness of the list container, but iterators are usually perceived as short lifetime objects, so it might be a workaround (:

36b416ed76cbaff49c8f6b7511458883
0
poita 101 Nov 13, 2009 at 03:36

@JarkkoL

Of course you could change the rules and say that every iterator referring to the list is invalidated upon list mutation, like with std::vector ;) Just dunno how it would change the usefulness of the list container, but iterators are usually perceived as short lifetime objects, so it might be a workaround (:

You wouldn’t need to do that, but you would need to modify a lot of code to keep counts everywhere, and furthermore, the code modifications would only be useful for specific cases.

Say that you have some list of numbers {1, 2, 3, 4, 5, 6, 7, 8}

and you get iterators into it (for splicing)

std::list<int>::iterator i = std::find(myList.begin(), myList.end(), 3);
std::list<int>::iterator j = std::find(myList.begin(), myList.end(), 6);

If you spliced using those, the splice would take linear time, as you don’t know distance(i, j). But you could know it if find returned the indices of i and j (which it can compute without changing the complexity of the function).

Now let’s say that, before you do the splice, some other (remote) procedure goes through and removes even elements from the list. Obviously that’s going to invalidate your indexes, but *in theory* you could modify the function that removed the even elements so that it updated your indexes for you (which it could do quite easily, and again, without changing its time complexity).

Obviously you could add an arbitrary amount of complexity to this situation to make it totally unworkable, but you will always be able to keep track of the size without added complexity, provided that you’re willing to hack your way through all the code :)

As I said earlier, this is totally impractical, and I don’t think there is any practical solution in imperative, structural programming languages.

Fe8a5d0ee91f9db7f5b82b8fd4a4e1e6
0
JarkkoL 102 Nov 13, 2009 at 11:56

@poita

Now let’s say that, before you do the splice, some other (remote) procedure goes through and removes even elements from the list. Obviously that’s going to invalidate your indexes, but *in theory* you could modify the function that removed the even elements so that it updated your indexes for you (which it could do quite easily, and again, without changing its time complexity).

No you couldn’t, because you would have to update all the iterators referring to the list, which would change the time complexity of the remove/add operations of the list (remember, there can be any number of iterators referring to the list). It’s not a matter of the language but a problem with the data structure.

Anyway, I think changing the rules might actually be a pretty good solution if you want both constant-time splice() and size(). More specifically, you could have only the splice() operation invalidated on iterators upon list item add/remove. This would also be easy to validate by keeping a mutation counter in the list, storing the count in iterators when you retrieve them from the list, and asserting in splice() that the counts match.