# Google makes yet another failed programming language

34 replies to this topic

### #21.oisyn

DevMaster Staff

• Moderators
• 1842 posts

Posted 12 November 2009 - 03:59 PM

poita said:

3. Language features should be axiomatic, by which I mean, if a feature can be constructed from other features, then it should not be a feature of the language.

My god. I hope I never have to work with a language that you designed. Syntactic sugar helps a great deal with respect to productivity and code maintainability. Your argument could have just as well been to use assembly, because you can do anything with assembly. And why add, if you can increment? Why multiply, if you can add?

Why have closures, if you can have regular function pointers which can't be defined in local scope and which can't access the variables out of the local scope, which doesn't matter because you can just as well put all the local variables in the struct and pass it to the global function. No thank you ma'am, I'm pretty fine with a language construct that allows me to express myself in a less verbose way which makes me more productive, my code less error-prone and more readable.
-
Currently working on: the 3D engine for Tomb Raider.

### #22TheNut

Senior Member

• Moderators
• 1701 posts
• LocationCyberspace

Posted 12 November 2009 - 04:13 PM

.oisyn said:

Your argument could have just as well been to use assembly, because you can do anything with assembly.
I was going to suggest LoseThos :lol:
http://www.nutty.ca - Being a nut has its advantages.

### #23SamuraiCrow

Senior Member

• Members
• 459 posts

Posted 12 November 2009 - 04:57 PM

poita said:

Hmm? I don't see how that will do any less jumps.

It moved the unconditional GOTO outside the loop to the very beginning. Inside the loop it will only have the conditional GOTO.

-edit-

Has anybody else here besides me worked in LLVM assembly? It seems to me that it, in conjunction with my partner's LLVM-PEG parser generator, could reach poita's ideal.

### #24Mihail121

Senior Member

• Members
• 1059 posts

Posted 12 November 2009 - 06:16 PM

Do not bash me, I'm on your side.

poita said:

Well first off, Stepanov already has changed the way programming works by co-inventing generic programming and developing the concept of iterators.

Iterators -- I agree. Generic programming -- I do not consider it an invention, sorry.

Quote

Second, if OOP works so well in practice then why has Go removed it? Why is C still such a popular language?

C is even my favorite language, precisely because it does not have OOP. OOP does work well in practice, however. I guess Go removed it because, as alphadog pointed out: "The issue is that OO and concurrent are orthogonal concepts."

Quote

And where is your evidence that OOP is working "very well" in practice? It is used in practice, yes, but that doesn't mean that it is working, or that it cannot be improved. Sure, programs are being written in the OOP style, but who's to say that they couldn't have been written twice as quickly or perform twice as well using some other method?

Oh, come on, not that discussion again. I have no evidence for something like that, and I also have doubts. But I see software around and it seems to be doing its job well enough. It's possible that it could perform better or be written more easily with some other language or concept, but please, read this argument this time: structure is an important feature of software (code) for us humans.

Quote

Not yet, but I'm working on it, and I like to think that I'm getting somewhere.

Show us what you have and let's discuss it, I'll be glad. Or contact me on ICQ, SKYPE, ..., it's a nice discussion.

Quote

20 years ago people just wrote min and max functions for floats, ints, doubles and whatever type they needed, and it worked! Does that mean that templates haven't helped? No.

You really like templates, don't you? I guess people didn't use them back then because no compiler or language supported them. But I'm certain every programmer who has written the same function for 7-8 types was sure it would be better to parameterize it.

Quote

Again, just because we can get by with the status quo doesn't mean that it cannot be improved, and it doesn't mean that the status quo cannot change.

It does not mean it's bad either.

### #25JarkkoL

Senior Member

• Members
• 475 posts

Posted 12 November 2009 - 06:44 PM

poita said:

The funny thing is: you *could* always know how many items you removed. List splicing requires that you have iterators for the positions that you want to splice, and the only way to get iterators from a list is to traverse it linearly. So, some time prior to the splice, you have traversed the list to get the splice-position iterators -- you just haven't kept count of how many elements you've traversed. If you had, then the splice could be done in O(1).
No you couldn't. After you have the two iterators for splicing, a third iterator could be adding/removing values in between them, messing up the count values. I made a design decision in my list implementation to maintain the list size within the list class and to allow only splicing of an entire list to another in O(1) time. Splicing is much rarer than asking for the size of the list.

### #26SyntaxError

Valued Member

• Members
• 139 posts

Posted 12 November 2009 - 07:44 PM

Quote

I love it. In the past, lots of programmers complained about memory management.

Along came languages that solved this boilerplate-ish type of coding and "liberated" the developer.

So what happens?

Developers now bitch about being "forced" to use GC.

Funny how that works out, huh?

Programmers aren't a monolithic group. Coming from Fortran I never complained about memory management. In fact it was an improvement. These days I typically use my own GC. If I write it myself, I have full control over how it's implemented and where I use it. I typically like reference counting because it's simple and predictable yet I only tend to use it for semi-permanent data structures. I can still pass around normal pointers for most operations. I can also use normal pointers for back-pointers in tree structures and avoid many GC issues. This kind of thing is hard to do if someone hands you a GC system which you are forced to use as is.

### #27

DevMaster Staff

• Moderators
• 1716 posts

Posted 12 November 2009 - 09:31 PM

Never said they were. That's why I said "lots", not "all".

When the industry was extolling the virtue of using Java or .NET (when it came out), one of the reasons was to be freed from the burden of memory management that C++ imposed, was it not?

In fact, I meant to illustrate that language design is always a huge set of trade-offs. And, no matter what method you pick, you'll always find a curmudgeon that wants it the opposite way.
Hyperbole is, like, the absolute best, most wonderful thing ever! However, you'd be an idiot to not think dogmatism is always bad.

### #28poita

Senior Member

• Members
• 322 posts

Posted 13 November 2009 - 12:44 AM

.oisyn said:

My god. I hope I never have to work with a language that you designed. Syntactic sugar helps a great deal with respect to productivity and code maintainability. Your argument could have just as well been to use assembly, because you can do anything with assembly. And why add, if you can increment? Why multiply, if you can add?

I think you may have misunderstood me. I meant that the features should not be *hard-coded* into the language. There will still be for loops, and they'll work exactly as they do now, except that they'll be part of the library, and you'll be able to write similar looping constructs at will.

At the lowest level, the language *will* be similar to assembly, however there will be mechanisms for abstraction so that you can write code at a high level.

Quote

Why have closures, if you can have regular function pointers which can't be defined in local scope and which can't access the variables out of the local scope, which doesn't matter because you can just as well put all the local variables in the struct and pass it to the global function. No thank you ma'am, I'm pretty fine with a language construct that allows me to express myself in a less verbose way which makes me more productive, my code less error-prone and more readable.

Again, things like closures would not be part of the core language, but instead would be part of a library. The obvious advantage is that, if you don't like how closures work, you can modify the source yourself, or write a new style of closure that works for your particular needs.

Quote

structure is important feature of software (code) for us humans.

I don't see where I said that we should remove structure from languages. OOP is one way to structure programs. Is it the best? I don't think so, and that's why I'm trying to find better ways.

Quote

Show us what you have and let's discuss it, I'll be glad. Or contact me on ICQ, SKYPE, ..., it's a nice discussion.

I'd be happy to discuss it, but my ideas are still immature, and it's all still in my head. I've already given a few characteristics of what the language would be like, but as I explained earlier I haven't solved some core problems yet, and the solution to those could completely change the language (from being imperative to declarative). As such, I don't see any point trying to develop anything concrete until I solve those problems.

---

I don't mean to come across as hostile. I just don't understand your "people are happy, so let's just live with what we've got, even if you think it could be improved" attitude. That doesn't really strike me as a good mindset for progress.

Quote

No you couldn't. After you have the two iterators for splicing, a third iterator could be adding/removing values in between them, messing up the count values. I made a design decision in my list implementation to maintain the list size within the list class and to allow only splicing of an entire list to another in O(1) time. Splicing is much rarer than asking for the size of the list.

That doesn't make it impossible, it just makes it more difficult. You know whether the third iterator is between the first two or not (due to linear traversal), so you know how it is affecting the count.

Regardless, I only need to argue the specific case here. What I'm saying is that there *are* situations where having extra information could dramatically improve performance, but in many cases that information would only be available if you modify subfunctions.

A similar problem is stream fusion. Let's say you wanted to do this:

```cpp
for (int i = 0; i < n; ++i)
    foo();

for (int i = 0; i < n; ++i)
    bar();
```

Of course, this can be improved by joining the loops (provided bar() doesn't rely on specific side-effects of foo()):

```cpp
for (int i = 0; i < n; ++i)
{
    foo();
    bar();
}
```

But what if you had already factored out the loops?

```cpp
foo_loop();
bar_loop();
```

The only way you can merge the loops then is to break the structure of your program (or hope the optimizer can do it for you).

And that really is what I see as a core problem of many imperative languages: they often force the programmer to choose between fast, unstructured code, and slow, structured code. I don't believe that programmers should ever have to make that decision.

### #29SamuraiCrow

Senior Member

• Members
• 459 posts

Posted 13 November 2009 - 02:00 AM

@poita

The optimizer cannot fix the first one to be like the second one, nor the third one to be like the second one. It can fix the third one to be like the first one, because they are equivalent and subject to inlining (assuming the functions are called only once). The second one is not equivalent, however, because in it each call to foo() is followed immediately by a call to bar(), whereas in the first and third the order is n calls to foo() followed by n calls to bar().

What could be done to speed up the code is this: since i is not used inside the loop, the compiler could convert the code to

```cpp
for (int i = n; i > 0; --i)
    foo();

for (int i = n; i > 0; --i)
    bar();
```

since comparisons to 0 are cheaper than comparisons to a constant.

Optimizers only replace code with equivalent functionality. They don't undo bad programming.

But as for your idea about extensible programming, it's not original. One of HP's programmers has a back-burner project along these lines: the XL programming language, on SourceForge.

### #30poita

Senior Member

• Members
• 322 posts

Posted 13 November 2009 - 02:53 AM

Sorry SamuraiCrow, I should have been more clear.

I wasn't talking about any particular instance of an optimizer, but a hypothetical optimizer for the hypothetical language that could in theory merge those two loops if it found that their merged functionality was equivalent.

My point was that you have to pick between structure (the third one) and performance (the second one), or else pick structure and hope that the optimizer can fix it for you.

As for XL: I'm aware of it, and it looks quite similar to what's running around in my mind, but it doesn't appear that they have any sort of solution to my list-splicing scenario above. I'm really starting to think that the ideal language absolutely must be declarative. I don't see any way around it.

### #31JarkkoL

Senior Member

• Members
• 475 posts

Posted 13 November 2009 - 03:06 AM

How would you keep the indices in each iterator referring to the list up to date? I don't see any practical solution that wouldn't add non-constant overhead to every operation that modifies the list, and that would defeat the purpose.

### #32poita

Senior Member

• Members
• 322 posts

Posted 13 November 2009 - 03:16 AM

JarkkoL said:

How would you keep the indices in each iterator referring to the list up to date? I don't see any practical solution that wouldn't add non-constant overhead to every operation that modifies the list, and that would defeat the purpose.

Well it wouldn't be practical at all, that's my point :) There is no practical solution using imperative, structural programming, but in theory it could be done.

### #33JarkkoL

Senior Member

• Members
• 475 posts

Posted 13 November 2009 - 03:21 AM

Of course you could change the rules and say that every iterator referring to the list is invalidated upon list mutation, like with std::vector ;) Just dunno how it would change the usefulness of the list container, but iterators are usually perceived as short lifetime objects, so it might be a workaround (:

### #34poita

Senior Member

• Members
• 322 posts

Posted 13 November 2009 - 03:36 AM

JarkkoL said:

Of course you could change the rules and say that every iterator referring to the list is invalidated upon list mutation, like with std::vector ;) Just dunno how it would change the usefulness of the list container, but iterators are usually perceived as short lifetime objects, so it might be a workaround (:

You wouldn't need to do that, but you would need to modify a lot of code to keep counts everywhere, and furthermore, the code modifications would only be useful for specific cases.

Say that you have some list of numbers {1, 2, 3, 4, 5, 6, 7, 8} and you get iterators into it (for splicing):

```cpp
std::list<int>::iterator i = std::find(myList.begin(), myList.end(), 3);
std::list<int>::iterator j = std::find(myList.begin(), myList.end(), 6);
```

If you spliced using those, the splice would take linear time, as you don't know distance(i, j). But you could know it if find returned the indices of i and j (which it can calculate without changing the complexity of the function).

Now let's say that, before you do the splice, some other (remote) procedure goes through and removes even elements from the list. Obviously that's going to invalidate your indexes, but *in theory* you could modify the function that removed the even elements so that it updated your indexes for you (which it could do quite easily, and again, without changing its time complexity).

Obviously you could add an arbitrary amount of complexity to this situation to make it totally unworkable, but you will always be able to keep track of the size without added complexity, provided that you're willing to hack your way through all the code :)

As I said earlier, this is totally impractical, and I don't think there is any practical solution in imperative, structural programming languages.

### #35JarkkoL

Senior Member

• Members
• 475 posts

Posted 13 November 2009 - 11:56 AM

poita said:

Now let's say that, before you do the splice, some other (remote) procedure goes through and removes even elements from the list. Obviously that's going to invalidate your indexes, but *in theory* you could modify the function that removed the even elements so that it updated your indexes for you (which it could do quite easily, and again, without changing its time complexity).
No you couldn't, because you would have to update all the iterators referring to the list, which would change the time complexity of the remove/add operations of the list (remember, there can be any number of iterators referring to the list). It's not a matter of the language, but a problem with the data structure.

Anyway, I think changing the rules might actually be a pretty good solution if you would like to have both constant-time splice() and size(). More specifically, you could have only the splice() operation invalidated upon list item add/remove. This would also be easy to validate by keeping a mutation counter in the list, storing the count in iterators when you retrieve them from the list, and asserting upon splice() that the counts match.
