OpenBSD Journal

BSD Licensed PCC Compiler Imported

Contributed by merdely on from the pretty-cool-compiler dept.

Anders Magnusson's BSD-licensed pcc compiler has been imported into CVS. He wrote to NetBSD's tech-toolchain list:

It is not yet bug-free, but it can compile the i386 userspace. The big benefit of it (apart from the fact that it's BSD-licensed, for the license geeks :-) is that it is fast, 5-10 times faster than gcc, while still producing reasonable code. The only optimization added so far is a multiple-register-class graph-coloring register allocator, which may be one of the best register allocators today. Conversion to SSA format is also implemented, but not yet the phi function. That is not too difficult, though; after that, strength reduction is high on the list.

Anders continues:

It is also quite simple to port: writing the basics for i386 (hello world) took three hours, and a complete port (pretty much as it is right now) took two days.

I have added most of the C99 stuff (it is supposed to be a C99 compiler), but some things are still missing, like the ability to do variable declarations anywhere (which requires some rewriting of the yacc code).

NetBSD also imported pcc into pkgsrc. The commit message gives a little bit of pcc's history:

The compiler is based on the original Portable C Compiler by S. C. Johnson, written in the late 70's. Even though much of the compiler has been rewritten, some of the basics still remain.

The intention is to write a C99 compiler while still keeping it small, simple, fast and understandable. I think of it as something that should be able to compile and run on a PDP-11 (even if that may never happen in reality). With this in mind, it becomes important to think twice about which algorithms are used.

The compiler is conceptually structured in two parts: pass1, which is language-dependent and does parsing, type checking and tree building; and pass2, which is mostly language-independent.

About 50% of the frontend code and 80% of the backend code has been rewritten. Most of it is written by me, with the exception of the data-flow analysis part and the SSA conversion code, which were written by Peter A Jonsson, and the MIPS port, which was written as part of a project by undergraduate students at LTU.

Otto Moerbeek (otto@) wrote to the pcc mailing list:

With some minor modifications to our source tree I'm able to build (and run) large parts of bin, sbin, usr.bin and usr.sbin of OpenBSD. src/lib needs some not-yet-available features, mostly asm-related stuff.

One thing that is missing is __attribute__ support. Currently, our sys/cdefs.h removes those unless compiling with GCC > 2.5. But we will really need it, for at least __packed__ and things like __attribute__((section(".eh_frame"), aligned(4)))

as can be found in src/lib/csu/common_elf/crtbegin.c

But, as an example, complex programs like ksh and ssh run (linked against libs built with gcc)!

Of course the road is still long, but things really look promising.

-Otto



  1. By Anonymous Coward (85.178.105.78) on

    Cool... it would be crazy to have a compiler which compiles the whole OS in, let's say, 1/5 of the time needed now, and is "more bug free". :]

    Maybe OpenBSD should start a little "project" to get developers to join this effort.

    1. By Anonymous Coward (62.99.143.190) on

      > Cool... it would be crazy to have a compiler which compiles the whole OS in, let's say, 1/5 of the time needed now, and is "more bug free". :]
      >
      > Maybe OpenBSD should start a little "project" to get developers to join this effort.

      I recall there is a far more modern compiler architecture being worked on by an academic group, in which Apple has shown interest, and it has a non-GPL license. It also supports C++, I think. Given the short description posted, it looks like this is basically going to be a rewrite of the whole thing. If you guys must take this on, maybe you can approach Apple, since they must want a compiler they can ultimately control, but they will want Objective-C and 64-bit support.

      I am saying: think this through, carefully. Rewriting a giant suite of programs just because you don't agree with the philosophy behind it sounds awful to people who have no stake in BSD licenses.

      1. By Anonymous Coward (74.13.45.175) on

        > I am saying: think this through, carefully. Rewriting a giant suite of programs just because you don't agree with the philosophy behind it sounds awful to people who have no stake in BSD licenses.

        It's not just the licence that is a concern with the GCC suite: its dropping of support for hardware that OpenBSD supports and its fluctuating compilation quality are also matters of concern to users.

        1. By Anonymous Coward (88.161.54.214) on

          > It's not just the licence that is a concern with the GCC suite: its dropping of support for hardware that OpenBSD supports and its fluctuating compilation quality are also matters of concern to users.

          Well, good luck with it then. I will check back in a few years.

          Yes, licenses matter so much to users that they read through the license of every program they use. Strange, isn't it? Of all things, non-lawyers passionate about licenses; like anyone is going to believe that.

          1. By Anonymous Coward (203.65.245.7) on

            > Yes, licenses matter so much to users that they read through the license of every program they use.

            Exactly what part of "not just the license" did you not understand?


        2. By Marc Espie (213.41.185.88) espie@openbsd.org on

          > > I am saying: think this through, carefully. Rewriting a giant suite of programs just because you don't agree with the philosophy behind it sounds awful to people who have no stake in BSD licenses.
          >
          > It's not just the licence that is a concern with the GCC suite: its dropping of support for hardware that OpenBSD supports and its fluctuating compilation quality are also matters of concern to users.

          The licence is just the tip of the iceberg.

          GCC is developed by people who have vastly different goals from us. If you go back and read the GCC lists, you'll notice several messages by me where I violently disagree with the direction it's following. Here is some *more* flame material.

          - GCC is mostly a commercial compiler these days. Cygnus Solutions has been bought by Red Hat. Most GCC development is done by commercial Linux distributors, and also Apple. They mostly target *fast* i386 architectures and PowerPC. A lot of work has been done on specmarks, *but* the compiler is getting bigger and bigger, and slower and slower (very much so).

          - GCC warnings are not *really* useful. The -Wall flag flags many real problems, and quite a few spurious ones.

          - There is a lot of churn in GCC which ends up with it no longer supporting some architectures that are still relevant to us.

          - The whole design of GCC is perverted so that someone cannot easily extract a front-end or back-end. This is broken by design, as the GPL people do believe this would make it easier for commercial entities to `steal' a front-end or back-end and attach it to a proprietary code-generator (or language). This is probably true. This also makes it impossible to write interesting tools, such as intermediate analyzers. This also makes it impossible to plug old legacy back-ends for old architectures into newer compilers.

          - As a result, you cannot have the new interesting stuff from newer GCC without also losing stuff... every GCC update is an engineering nightmare, because there is NO simple choice. You gain some capabilities, and you also lose some important stuff.

          - It's also very hard to do GCC development. Their branching system makes it very likely that some important work falls between the cracks (and this happens all the time). If you develop code for GCC, you must do it on the most recent branch, which is kind of hard to do if your platform is currently broken (which happens *all the time* if you're not running linux/i386). Even when you conform, it's hard to write code to the GNU coding standards, which are probably the most illegible coding guidelines for C; it's obvious they were written by a lisp programmer. As a result, I've even lost interest in rewriting a few pieces and getting them into the GCC repository.

          - Some of their most recent advances do not have a chance of working on OpenBSD, like preparsed includes, which depend on mmap() at a fixed location.

          - there are quite a few places in GCC and G++ where you cannot have full functionality without having a glibc-equivalent around.

          - some of the optimisation choices are downright dangerous, and wrong for us (like optimizing memory fills away, even if they deal with crypto keys).

          - don't forget the total nightmare of autoconf/libtool/automake. Heck, even the GCC people have taken years to update their infrastructure to a recent autoconf. And GCC is *the only program in the ports tree* that actually uses its own libtool. Its configuration and reconfiguration fails abysmally when you try to use a system-wide libtool.

          I could actually go on for pages...

          I've actually been de facto maintainer of GCC on OpenBSD for a few years by now, and I will happily switch to another compiler, so frustrating has been the road with GCC.

          1. By Anonymous Coward (91.163.143.182) on

            Maybe they are accepting patches?

            1. By Anonymous Coward (193.200.150.45) on

              > Maybe they are accepting patches?
              >

              Maybe if you'd done your research you'd see that they don't.

            2. By Anonymous Coward (68.100.130.1) on

              > Maybe they are accepting patches?
              >
              If this is a joke, it's hilarious.

              Actually, if this isn't a joke, it's still hilarious. >:)

            3. By Pizza is your friend (68.125.31.8) on

              To quote Espie:
              "As a result, I've even lost interest in rewriting a few pieces and getting them into the GCC repository."

              That answers your question: the code base is a mess he does not want to work with.

            4. By rookie (87.166.208.208) on

              > Maybe they are accepting patches?
              >

              Ehrrm, gcc fills my terminal with these crappy unreadable lines for hours, is this normal?

          2. By Pizza is your friend (68.125.31.8) on

            Thanks. Very insightful.

          3. By Anonymous Coward (134.76.62.65) on

            > - GCC warnings are not *really* useful. The -Wall flag flags many real problems, and quite a few spurious ones.

            Then you probably should fix your code. The flags I use for userland programs are "-Wall -Waggregate-return -Wmissing-declarations -Wmissing-prototypes -Wredundant-decls -Wshadow -Wstrict-prototypes -Winline -pipe", and the only warning that _may_ get annoying is the "variable may be uninitialized" one. That one is not easy, because the compiler cannot always figure it out without spending more time (and then you would be complaining, again, that it is slow).

            1. By Anonymous Coward (68.148.4.19) on

              >
              > Then you probably should fix your code. The flags I use for userland programs are "-Wall -Waggregate-return -Wmissing-declarations -Wmissing-prototypes -Wredundant-decls -Wshadow -Wstrict-prototypes -Winline -pipe", and the only warning that _may_ get annoying is the "variable may be uninitialized" one. That one is not easy, because the compiler cannot always figure it out without spending more time (and then you would be complaining, again, that it is slow).

              Ugh, why then say "may be uninitialized"? Why not just warn with
              "program may contain bugs" all the time? Seriously. Why not instead
              warn "variable *IS* uninitialized" only when you are 100% sure that
              it is, and that it is used in that state? Makes 100% more sense to
              me.

              -T.

      2. By Ray Lai (ray) on undeaditor

        > I am saying: think this through, carefully. Rewriting a giant suite of programs just because you don't agree with the philosophy behind it sounds awful to people who have no stake in BSD licenses.

        pcc was imported because it is fast and well designed. We have been trying to get rid of gcc for years because it is slow, buggy, and unmaintainable. The license is just the cherry on top.

        1. By Anonymous Coward (18.243.2.53) on

          > I am saying: think this through, carefully. Rewriting a giant suite of programs just because you don't agree with the philosophy behind it sounds awful to people who have no stake in BSD licenses.
          >
          > pcc was imported because it is fast and well designed. We have been trying to get rid of gcc for years because it is slow, buggy, and unmaintainable. The license is just the cherry on top.

          "Have been trying" sounds like you gave up... really?

          Why not get rid of GCC on some archs first?
          Not all OpenBSD archs support the same GCC anyway.
          GCC itself has already dropped architectures OpenBSD supports, and this leads to a split (GCC 3 vs GCC 2; compatibility could become an issue sometimes).

          If some compiler experts would sit down and take some time (let's say six months), they could, for example, add support for architectures GCC has already dropped, which would surely raise NetBSD's interest as well.
          So there would be more and more progress...

          And finally we might get rid of GCC on all supported architectures in two years.

          The OpenBSD crew would simply make the compiler as secure as possible, and the NetBSD guys would be responsible for porting it to the toasters! :)
          And the FreeBSD guys could make it display a nice movie or so during compiling... *hrhr* (Well, FreeBSD doesn't care for freedom anyway, so drop my comment about FoolBSD anyway) :]

        2. By Anonymous Coward (213.130.142.209) on

          > pcc was imported because it is fast and well designed. We have been trying to get rid of gcc for years because it is slow, buggy, and unmaintainable. The license is just the cherry on top.

          I feel this import was hasty.

          ragge has been working on this on and off for some years now; it is not ready for prime time and there is still work to be done. Why import it into the OpenBSD tree now, making a fork before it is even released? Better, if somebody wants to work on it, to go to ragge and join his efforts to make it good for all. There are not so many people that we (all the BSDs) can afford to have our own sets of developers working on the same project separately.

          also, there is the advertising clause... which IIRC puffy does not like?

          1. By Anonymous Coward (74.13.45.175) on

            > also, there is the advertising clause... which IIRC puffy does not like?

            That clause is easier to deal with than the GPL.

          2. By Todd T. Fries (todd) todd on http://todd.fries.net/

            > > pcc was imported because it is fast and well designed. We have been trying to get rid of gcc for years because it is slow, buggy, and unmaintainable. The license is just the cherry on top.
            >
            > I feel this import was hasty.
            >
            > ragge has been working on this on and off for some years now; it is not ready for prime time and there is still work to be done. Why import it into the OpenBSD tree now, making a fork before it is even released? Better, if somebody wants to work on it, to go to ragge and join his efforts to make it good for all. There are not so many people that we (all the BSDs) can afford to have our own sets of developers working on the same project separately.
            >
            > also, there is the advertising clause... which IIRC puffy does not like?
            >

            Nobody is forking anything. The code exists in the OpenBSD repository for the convenience of anyone who wants to utilize it on OpenBSD. Changes are sync'ed back and forth between the two codebases. It's much easier to work on something in a repository that you have commit access to than it is to pass diffs all over creation and back.

      3. By Friedrich (91.19.50.36) on http://www.q-software-solutions.de

        This topic is quite interesting and I can see both sides. However, one thing seems to be under-emphasized:
        GCC is large, and by that I mean very, very large,
        and it supports all kinds of "C"-based languages, which surely contributes to the code base quite a bit.
        I'd argue that no one of the GCC developers can really tell what's going on, and judging from the docs I'd argue even a lot of them together could not point it out... or they do not care, or whatever.

        However, we can ask ourselves what it would be like if GCC were a C compiler only. One can't deny it would be much smaller, and with it being smaller there would be fewer bugs. So we can safely assume that this hypothetical compiler would be much more reliable and understandable, and I bet it would be easier to port as well. I'd assume the generated code would be smaller too, and I'd think it could compile faster as well.

        However, we have serious problems with security in C, and surely many more in C++. The costs of this insecurity can hardly be overestimated. So, as much as I like C, one has to think about the alternatives and the "future".

        And this future seems to be computers with a lot of processors, where we run into serious problems as well: programming with threads can't be called easy, and now assume you have not "just" threads but a lot of processors too. Programming for that without help from tools seems a very poor choice. Add to that the problems of no bounds checking, manual memory management and all the rest, and it does not sound like a programmer's heaven...

        However, bounds checking does not seem to be that expensive, if you just see how well OCaml performs in this area. Even in other aspects (dealing with state etc.) it seems a good idea to borrow from those "functional" languages.

        I think before starting the journey of replacing gcc with some other C compiler, it would be worth checking out alternatives in areas like:
        - resource management
        - shared state
        - a "proper" string data structure
        - bounds checking
        - the implications of multi-core systems

        It might also be a good idea to see where most of the debugging time is spent. If none of the above is really a problem, then knowing that would be really good, because then one can say: let's just take a "simple" C compiler...

        However, my feeling is that at least 70-75% of all debugging time is spent in those areas.


        Have nice programming day
        Friedrich



      4. By Anonymous Coward (83.138.136.90) on

        > I am saying think this through and carefully. Rewriting a giant suite of programs just because you don't agree with the philosophy behind it sounds awful to people who have no stakes in BSD licenses.

        Funnily enough, that is exactly how GNU started.

    2. By McGyver (62.178.150.152) mcgyver@foo.de on

      > Cool... it would be crazy to have a compiler which compiles the whole OS in, let's say, 1/5 of the time needed now, and is "more bug free". :]
      >
      > Maybe OpenBSD should start a little "project" to get developers to join this effort.

      And the result would be 20 times slower...

      1. By Anonymous Coward (88.84.142.165) on

        > > Cool... it would be crazy to have a compiler which compiles the whole OS in, let's say, 1/5 of the time needed now, and is "more bug free". :]
        > >
        > > Maybe OpenBSD should start a little "project" to get developers to join this effort.
        >
        > And the result would be 20 times slower...

        I don't agree. For example, the router daemons OpenBSD created are faster and more reliable. Also, PF is the fastest software firewall I know of. OpenSSH got a speedup lately as well, and other things have too.

        I think OpenBSD/NetBSD could create a much better compiler than G[eneral]N[othing]U[seful].

        And that is a serious belief, because, as pointed out: GCC is badly buggy. ;-/

        1. By McGyver (62.178.150.152) mcgyver@foo.de on

          > > > Cool... it would be crazy to have a compiler which compiles the whole OS in, let's say, 1/5 of the time needed now, and is "more bug free". :]
          > > >
          > > > Maybe OpenBSD should start a little "project" to get developers to join this effort.
          > >
          > > And the result would be 20 times slower...
          >
          > I don't agree. For example, the router daemons OpenBSD created are faster and more reliable. Also, PF is the fastest software firewall I know of. OpenSSH got a speedup lately as well, and other things have too.
          >
          > I think OpenBSD/NetBSD could create a much better compiler than G[eneral]N[othing]U[seful].
          >
          > And that is a serious belief, because, as pointed out: GCC is badly buggy. ;-/

          That's the fuck*n problem with you BSD guys. We are talking about a damn non-optimizing compiler here. Of course your binaries are damn slower when they are not optimized, no matter what.

          And if you really believe you can have a compiler that is five times faster than GCC, has fewer bugs and creates equally good binaries for the same number of targets, then fuck*n go for it. But sooner or later reality might catch up with you...

          1. By Anonymous Coward (71.139.238.156) on

            > That's the fuck*n problem with you BSD guys. We are talking about a damn non-optimizing compiler here. Of course your binaries are damn slower when they are not optimized, no matter what.

            That's the f'n problem with you trolls. A good working system comes first, optimization comes last. Can't even censor yourselves properly, either.

            > And if you really believe you can have a compiler that is five times faster than GCC, has fewer bugs and creates equally good binaries for the same number of targets, then fuck*n go for it. But sooner or later reality might catch up with you...

            GCC exists beyond Linux and BSD. Archs and OSes come and go for GCC and will continue to do so for quite a long time. It'll still be here if the project decides to use it.

            1. By Igor Sobrado (sobrado) sobrado@ on

              > That's the f'n problem with you trolls. A good working system comes first, optimization comes last. Can't even censor yourselves properly, either.

              Indeed. Quoting Donald E. Knuth on his Turing Award lecture: "Premature optimization is the root of all evil (or at least most of it) in programming."

              > GCC exists beyond Linux and BSD. Archs and OSes come and go for GCC and will continue to do so for quite a long time. It'll still be here if the project decides to use it.

              The lack of stability in the set of architectures available in gcc is bad for our project (and for other projects that depend on a C compiler supporting the less usual computer architectures). On top of that, gcc is slow (and it becomes worse with each release), too large, complex, and buggy. It is better to have a small, fast and maintainable C compiler that strictly follows the C standards. It is also better to provide documentation as manual pages, in either -mandoc or -man format, than in GNU info format.

              As you write, gcc will still be available for building software packages that depend on non-standard extensions to the C language and other GNUisms. In my humble opinion, pkgsrc and ports are the right places for it.

          2. By Gilles CHEHADE (193.47.80.25) veins@evilkittens.org on http://www.evilkittens.org/dlog/veins/

            > That's the fuck*n problem with you BSD guys. We are talking about a damn non-optimizing compiler here. Of course your binaries are damn slower when they are not optimized, no matter what.
            >
            > And if you really believe you can have a compiler that is five times faster than GCC, has fewer bugs and creates equally good binaries for the same number of targets, then fuck*n go for it. But sooner or later reality might catch up with you...

            Meanwhile, will you spare us your comments?

          3. By Anonymous Coward (83.138.189.76) on

            If you don't understand the issues involved then don't say anything.

        2. By Anonymous Coward (202.7.176.130) on

          > I think OpenBSD/NetBSD could create a much better compiler than G[enerally]N[othing]U[seful].

          I love it. ;-)

    3. By squiggleslash (66.32.106.126) on

      > Cool.. would be crazyto have a compiler wich compiles the whole OS in lets say 1/5 of the time needed now + "more bug free". :]
      >
      > maybe OpenBSD should rise a little "Project" to get developers to join this effort.

      It's fast, but only because it doesn't do the same level of optimizations that GCC does, nor have the same open architecture that allows for multiple language support.

      I'm sure GCC can be improved, and I'm sure a *pure* C compiler can be a little more optimal than GCC at its best in terms of speed of compilation, but don't expect PCC to have quite the same level of usability in practice. GCC has evolved the way it has because the speed of compilation has been considered secondary to the overall functionality people want to see in it - especially C++ support and small, fast code.

      In the long term, projects like GCJ may well help make C itself less relevant. That's still a controversial thing to say - there are many people who've convinced themselves that it's impossible to write something like an operating system kernel in a managed language. I suspect that as time goes by, and the benefits of such an approach become more obvious, that view will slowly disappear. It requires someone brave enough to develop the code in the face of certain ridicule, though.

      1. By Ray Lai (ray) on http://cyth.net/~ray/

        > It's fast, but only because it doesn't do the same level of optimizations that GCC does, nor have the same open architecture that allows for multiple language support.
        >
        > I'm sure GCC can be improved, and I'm sure a *pure* C compiler can be a little more optimal than GCC at its best in terms of speed of compilation, but don't expect PCC to have quite the same level of usability in practice. GCC has evolved the way it has because the speed of compilation has been considered secondary to the overall functionality people want to see in it - especially C++ support and small, fast, code.

        Don't misunderestimate the importance of a fast compiler. Development time spent waiting for the compiler is completely wasted. The faster the compiler, the faster developers can go back to coding, testing, or debugging. Five to ten times as many snapshots can be created for wider user testing. Broken trees can be detected sooner.

        As an OpenBSD developer, I do not see gcc's multiple language support as a feature. Unlike you, I do see pcc's speed as much needed functionality.

        If you want gcc's supposedly superior code, you can recompile your system with gcc. Don't come crying when you hit one of its bugs and nobody can fix it, though.

        1. By squiggleslash (66.32.106.126) on

          > I do not see gcc's multiple language support as a feature. Unlike you, I do see pcc's speed as much needed functionality.

          *Unlike* me?

          Can you not put words into my mouth, please? I most certainly never said that PCC's speed wasn't useful; my post was pointing out that all of this comes at a price. Giving up support for multiple languages, and a lack of optimization, are two of those prices.

          You've indicated you're happy to lose the former, though I wonder if you have really thought through the consequences of that. (Clue: GCC will still end up on your system, co-existing with PCC, with the complications that guarantees.) Multiple-language support is certainly a feature; not all core code these days is written in pure, vanilla C. While PCC may help with a few components such as the kernel, the more everyday use of installing applications and "big app development" is likely to continue to require GCC.

          As I said above, I'm hoping that C will start to die out once GCJ reaches a usability threshold. I love the language, I really do, but 99% of the problems with security and stability in computing today come from the fact that C has substantial design flaws inherent in its unmanaged design.

          BTW everyone keeps posting how bug ridden GCC is. You know, if it was that bad, I suspect there really would have been a serious effort to replace it by now, rather than one just starting. I've never had a problem with it. I don't doubt that bugs exist, but people here are even posting examples of things that clearly aren't issues: "-O3" may break your code, but everyone knows that, that's why it's a non-default option that only people who know what they're doing use. There's nothing wrong with saying "Ok, we're going to give you a bunch of optimization options - these three will not break anything, this one here makes certain assumptions so you need to make sure your code fits these assumptions and write your code around it if you want to use it, this one makes even more assumptions, {...etc...}." It's called having choices.

          Here's a more interesting idea than PCC. Has anyone on the OpenBSD team thought, given OpenBSD's traditional solid code bias, about creating a C derived language that provides the managed environment needed to avoid corrupted pointers, stacks, etc, without the overheads of Java? Something that the kernel could easily be converted over to?

          There are good C compilers already, and frankly, there's not a lot wrong with GCC. The real contribution that can be made to computing right now comes from anyone willing to go further than just port another C compiler. Especially when it looks like that ported C compiler will be little better than the one it replaces once it's actually capable of generating comparable code. (Is this really about performance? Or is this license zealotry rearing its ugly head, again?)

          1. By Igor Sobrado (sobrado) sobrado@ on

            > Here's a more interesting idea than PCC. Has anyone on the OpenBSD team thought, given OpenBSD's traditional solid code bias, about creating a C derived language that provides the managed environment needed to avoid corrupted pointers, stacks, etc, without the overheads of Java? Something that the kernel could easily be converted over to?

            Hmmm... are you serious? Sorry, it makes no sense.

            1. By squiggleslash (66.32.106.126) on

              > > Here's a more interesting idea than PCC. Has anyone on the OpenBSD team thought, given OpenBSD's traditional solid code bias, about creating a C derived language that provides the managed environment needed to avoid corrupted pointers, stacks, etc, without the overheads of Java? Something that the kernel could easily be converted over to?
              >
              > Hmmm... are you serious? Sorry, it makes no sense.

              What, exactly, makes no sense? You're not seriously advocating pointers, stack overflows, etc, are you?

              I don't really understand what you believe makes no sense.

              1. By Anonymous Coward (84.221.81.145) on

                >
                > What, exactly, makes no sense? You're not seriously advocating pointers, stack overflows, etc, are you?
                >
                > I don't really understand what you believe makes no sense.

                First of all, you can't catch all errors... and once you have found how to avoid these errors, there will be other errors that you can make.

                Secondly catching error at runtime like Java has an unavoidable cost and is not so obviusly a good idea.

                Probably the best is offering exact control on what to check and what not to check and easily switch between the two. (like STL .at() vs C [])

                1. By Anonymous Coward (66.32.106.126) on

                  > >
                  > > What, exactly, makes no sense? You're not seriously advocating pointers, stack overflows, etc, are you?
                  > >
                  > > I don't really understand what you believe makes no sense.
                  >
                  > First of all, you can't catch all errors: once you have found a way to avoid one class of errors, there will be other errors you can still make.

                  That's not really an argument. Some errors can't be caught, but you can use (or develop) a programming language that makes certain errors impossible. That reduces the number of errors and rules out certain types of error entirely, which is a good thing, especially when those are the most common types. What proportion of security holes have you seen that weren't buffer overflows or pointer issues?

                  > Secondly, catching errors at runtime, as Java does, has an unavoidable cost and is not so obviously a good idea.

                  Have you programmed in Java? For the most part, buffer overflows and pointer dereferencing issues are dealt with by making the concepts impossible. It's like what they're trying to do with newspeak in 1984, where certain concepts become impossible to express because they're redefining the language to exclude them, except it's a programming language so it's not, actually, evil!

                  While there are some additional run-time checks compiled into Java binaries, for the most part the key concepts are dealt with by providing alternative methods of doing the things most programmers currently do using pointers. It's also worth noting that the same run-time checks are manually inserted by good C programmers - for example, we use strncpy rather than strcpy because the latter, while free of run-time checks and thus "faster", is a common vector for buffer overflows.

                  Java itself isn't bad in terms of performance. If you have any doubts about that, download Jake2 and have a play with it. It's Quake 2, written in Java, and when I tried it, the framerate was close (as in within 5%) to that of the C version. This was on both a PPC OS X Mac and a GNU/Linux laptop, running the latest (JIT) JREs for both.

                  Finally, bear in mind that part of the discussion here is about replacing GCC with PCC. The latter is known to generate slower code, and many people are jumping on here saying that this is acceptable because the code might be more reliable. Consider, for a moment, the implications of that: people are happy about switching to a slower implementation of a language because of a perceived improvement in code reliability, but not to a language that's nearly as fast (doesn't have to be Java either, could equally be Modula 2 or some other managed language that doesn't have the flaws of C yet is equally procedural) and is guaranteed more reliable (no perceptions about it!)

                  One of the sibling posts also says OpenBSD is evolutionary rather than revolutionary, and I take that point. But I don't agree with you about the performance issues or the fact that some bugs will exist anyway. Just because older versions of Java were interpreted and chronically slow doesn't mean that they're representative of what we can get out of these types of technology.

                  Gosling's a genius, it's just a shame Sun marketed Java as a cross platform environment rather than a fix to what's wrong about modern programming.

                  1. By Anonymous Coward (84.221.198.75) on

                    > For the most part, buffer overflows and pointer dereferencing issues are dealt with by making the concepts impossible.

                    Every error remains possible no matter which programming language you use. For each buggy C program you can build a buggy Java program by encoding the C program in a String and passing that String to a C interpreter written in Java. The only difference is that the bug will have different effects because of the runtime checks.

                    > It's also worth noting that the same run-time checks are manually inserted by good C programmers - for example, we use strncpy rather than strcpy because the latter, while free of run-time checks and thus "faster", is a common vector for buffer overflows.

                    There is no general answer there. Sometimes people want more runtime checks and sometimes they want more speed.

                    > Java itself isn't bad in terms of performance. If you have any doubts about that, download Jake2 and have a play with it. It's Quake 2, written in Java, and when I tried it, the framerate was close (as in within 5%) to that of the C version. This was on both a PPC OS X Mac and a GNU/Linux laptop, running the latest (JIT) JREs for both.

                    Checking bounds, and keeping objects alive that we know at compile time are already unreachable, has a cost. I'm not sure you can always ignore it.

                    > Finally, bear in mind that part of the discussion here is about replacing GCC with PCC. The latter is known to generate slower code, and many people are jumping on here saying that this is acceptable because the code might be more reliable. Consider, for a moment, the implications of that: people are happy about switching to a slower implementation of a language because of a perceived improvement in code reliability, but not to a language that's nearly as fast (doesn't have to be Java either, could equally be Modula 2 or some other managed language that doesn't have the flaws of C yet is equally procedural) and is guaranteed more reliable (no perceptions about it!)

                    I'm speaking here because I'm interested in PCC too, and I'm willing to pay a little to have more reliable programs. Still, I don't like Java as a general-purpose programming language, because it forces you to forgo certain optimizations even when you are 100% sure they are safe.


                  2. By Anonymous Coward (74.115.21.120) on

                    > Java itself isn't bad in terms of performance.

                    No, languages don't have performance characteristics; implementations do. And every existing implementation of Java is much slower than C.

                    > If you have any doubts about that, download Jake2 and have a play with it. It's Quake 2, written in Java, and when I tried it, the framerate was close (as in within 5%) of the C equivalent.

                    That's what happens when 90%+ of the time is spent in C libraries. Try something that doesn't rely almost entirely on the speed of your video card, its drivers, and your OpenGL implementation (which is in C).

                    > Gosling's a genius, it's just a shame Sun marketed Java as a cross platform environment rather than a fix to what's wrong about modern programming.

                    No, he isn't, and Java does nothing to fix what is wrong with modern programming. Java is a huge leap backwards from the languages that came before it. The only language you could possibly argue that Java "fixes" is C++. But of course: "Comparing Java and C++ is like comparing the taste of tree bark and grasshoppers".

                  3. By Anonymous Coward (83.138.189.76) on

                    > Gosling's a genius, it's just a shame Sun marketed Java as a cross platform environment rather than a fix to what's wrong about modern programming.

                    Firstly, it's the cross-platform nature that gives Java its security; GCJ-compiled binaries built from Java source are not as secure.

                    Secondly, C is really a cross-platform assembler; that's why everyone writes the low-level code in C and assembler. You want full access to the hardware; that is what kernels are for. You do not want some language jumping in and preventing you from doing that. You don't. Really.

              2. By Pierre Riteau (82.65.103.95) on

                > > > Here's a more interesting idea than PCC. Has anyone on the OpenBSD team thought, given OpenBSD's traditional solid code bias, about creating a C derived language that provides the managed environment needed to avoid corrupted pointers, stacks, etc, without the overheads of Java? Something that the kernel could easily be converted over to?
                > >
                > > Hmmm... are you serious? Sorry, it makes no sense.
                >
                > What, exactly, makes no sense? You're not seriously advocating pointers, stack overflows, etc, are you?
                >
                > I don't really understand what you believe makes no sense.

                Remember that OpenBSD is evolution, not revolution.
                I think those features would belong in a research kernel developed with a research language, not in OpenBSD.

              3. By Igor Sobrado (sobrado) sobrado@ on

                > > Hmmm... are you serious? Sorry, it makes no sense.
                >
                > What, exactly, makes no sense? You're not seriously advocating pointers, stack overflows, etc, are you?
                >
                > I don't really understand what you believe makes no sense.

                OpenBSD is a BSD operating system; we cannot split this operating system from the C programming language. You are proposing developing a Java-like language (let us call it "D") and rewriting OpenBSD using that language. It does not make sense at all.

                The C language and the Unix (and Unix-like) operating systems, in all their flavors, have evolved in parallel. We cannot conceive of a Unix(-like) operating system that is not C-based, in the same way that C would not make sense if it dropped its Unix roots.

                The Unix(-like) operating systems and the C language are evolving together. The strongest feature of C is that it has been developed to support Unix; in the same way, the strongest feature of Unix is that it is based on C.

                On the other hand, Pierre Riteau has a very good point. OpenBSD is based on evolution, not revolution. We rely on fixing bugs, good programming practices, and code auditing. That is the reason OpenBSD is, probably, the most secure and reliable operating system right now.

          2. By Anonymous Coward (90.199.221.46) on

            > I don't doubt that bugs exist, but people here are even posting examples
            > of things that clearly aren't issues: "-O3" may break your code, but
            > everyone knows that, that's why it's a non-default option that only
            > people who know what they're doing use.

            If GCC generates correct code without -O3 but generates code that fails with -O3, then the -O3 option is broken, no? It might work well enough that it doesn't often fail, but that doesn't make it correct.

            1. By Anonymous Coward (66.32.106.126) on

              > > I don't doubt that bugs exist, but people here are even posting examples
              > > of things that clearly aren't issues: "-O3" may break your code, but
              > > everyone knows that, that's why it's a non-default option that only
              > > people who know what they're doing use.
              >
              > If GCC generates correct code without -O3 but generates code that fails with -O3 then the O3 option is broken, no? It might work enough that it doesn't often fail but that doesn't make it correct.

              Nope. -O3 usually fails because the programmer is doing things involving multithreading or direct hardware access; the optimizations being performed are reasonable if the compiler assumes otherwise.

              Do you really want certain optimizations to be unavailable simply because a programmer *might* be doing things that would break those optimizations, or do you want them to be available, but optional and not enabled by default?

              Most of us would go for the latter. We'd like the benefits of something when those benefits can be realized.

              1. By Anonymous Coward (193.200.150.45) on

                > Do you really want certain optimizations to be unavailable simply because a programmer *might* be doing things that would break those optimizations, or do you want them to be available, but optional and not enabled by default?

                Yes. If it is broken, do not make it available. GCC's -O3 sucks peanuts through a straw out of a turd.

              2. By Anonymous Coward (128.151.69.110) on

                > Nope. -O3 usually fails because the programmer is doing
                > things involving multithreading or direct hardware access,
                > the optimizations that are being performed are reasonable
                > if the compiler is assuming otherwise.

                Isn't that what the ``volatile'' keyword was created for? If GCC emits code that is broken for concurrent access of something marked "volatile", that's a bug.

                1. By art (213.56.159.23) on

                  > Isn't that what the ``volatile'' keyword was created for? If GCC emits
                  > code that is broken for concurrent access of something marked
                  > "volatile", that's a bug.

                  Yes, volatile is supposed to do something. In GCC sometimes it doesn't. Yes, this has led to numerous bugs. Now, can we please stop speculating?

                  1. By Anonymous Coward (69.207.171.114) on

                    > Yes, volatile is supposed to do something. In GCC sometimes
                    > it doesn't. Yes, this has led to numerous bugs. Now, can we
                    > please stop speculating?

                    Well, someone else said something very vague about a GCC bug. So I asked about volatile. How is this "speculating"?

                2. By Anonymous Coward (83.138.189.76) on

                  > Isn't that what the ``volatile'' keyword was created for? If GCC emits code that is broken for concurrent access of something marked "volatile", that's a bug.

                  Volatile is a false promise, ask Linus.

          3. By Anonymous Coward (65.34.99.75) on

            > As I said above, I'm hoping that C will start to die out once GCJ reaches a usability threshold. I love the language, I really do, but 99% of the problems with security and stability in computing today come from the fact that C has substantial design flaws inherent in its unmanaged design.

            If all that stuff were so important, why aren't we using Ada already? Do you really think Java can hold a candle to it in the areas you mention?
            Anyway, nobody really gives a shit about Java, except PHBs and some poor fools who are forced to use it to earn a living. It's a language with no soul, born directly in hell (Sun marketing).

          4. By Anonymous Coward (85.195.119.14) on

            JAVA? BWAHAHAHAHAHAHAHAHAHHAHAHAHAHA

            Java is the most retarded shit that has happened to the computer scene in a decade. Anyone saying anything java + kernel is completely retarded; not a little bit, very much retarded.

            I'll take a wild guess on how many drivers and other hardware code you have written. NULL!

            1. By Anonymous Coward (128.151.69.110) on

              My two points to contribute to this discussion:

              1. I think everyone will agree that the guy floating the "OS in Java" concept here is not in line with OpenBSD's goals. He's trolling, and it's entirely inappropriate.

              2. HOWEVER, to play devil's advocate, there is some benefit to a JVM-based OS, since you could implement a JVM-based OS whereby task switching is done without switching page tables, thus you do not incur the TLB invalidation hit as in a conventional OS. Microsoft's Singularity research OS does something like this, using C#. Honestly it's a good idea, if all your applications happen to run in the JVM or CLI. That said, OpenBSD will never use a model like this, and I agree, it shouldn't.

              Bottom line: A bytecode language might actually have a place in OS, and it's silly to discount the idea entirely. It's been done before, and surely it will be done in the future. BUT, I agree, it's even more ridiculous to expect that all conventional operating systems be rewritten in Java or C# simply because it's the latest buzz.

              1. By Anonymous Coward (76.10.128.247) on

                > 2. HOWEVER, to play devil's advocate, there is some benefit to a JVM-based OS, since you could implement a JVM-based OS whereby task switching is done without switching page tables, thus you do not incur the TLB invalidation hit as in a conventional OS.

                I've thought about this in the past, although I'm the first to admit I'm far from understanding it as well as I need to. However, doesn't this require running everything within one process? That seems like a bad idea, because you've then only got 4G of virtual memory for all processes (even if you don't use it all, I'm sure it would make fragmentation a problem; how many stacks would you need?). And even though you theoretically don't need the protection you get from virtual memory addressing, it seems silly to just throw it away. Eventually there would be processors designed to support various aspects of managed operating systems, and you'd end up re-inventing everything. You'd be in the same place with nothing to show for it but a different language syntax.

                Now, I like Java, and I think some aspects of having everything be managed and virtual are cool, but the more I learn about Unix, the more it depresses me how much Java needlessly reinvents.

                1. By Anonymous Coward (69.207.171.114) on

                  > However, doesn't this require running everything within one process?
                  > This seems like a bad idea because you've only got 4G of virtual memory
                  > for all processes then

                  Well, there is nothing that says the JVM has to store handles to objects as plain pointers... It could store them as some sort of [more than 32-bit wide] data structure that refers to something either in RAM or swapped out to disk, or as a key into such a structure.

                  When you move to a platform with a 64-bit address space, I suppose this is less of an issue.

          5. By Anonymous Coward (198.175.14.5) on


            > BTW everyone keeps posting how bug ridden GCC is. You know, if it was that bad, I suspect there really would have been a serious effort to replace it by now, rather than one just starting. I've never had a problem with it.

            Maintaining GCC, keeping it in sync with the various SSP patches, supporting all the OpenBSD architectures, and producing bug-free code have been very large tasks that have taken up enormous amounts of developer time. People have spent days and weeks tracking down bugs that turned out to be GCC bugs. Unless you've been very close to the development, you wouldn't see any of this. It would be nice if people could spend the same time working on a better compiler instead of fighting with GCC. Fighting GCC is frustrating; making something better is not.

        2. By Igor Sobrado (sobrado) sobrado@ on

          > Don't misunderestimate the importance of a fast compiler. Development time spent waiting for the compiler is completely wasted. The faster the compiler, the faster developers can go back to coding, testing, or debugging. Five to ten times as many snapshots can be created for wider user testing. Broken trees can be detected sooner.

          There are important advantages for non-developers too. Smaller and faster compilers are valuable when applying patches on low-end architectures or underpowered embedded systems.

          > As an OpenBSD developer, I do not see gcc's multiple language support as a feature. Unlike you, I do see pcc's speed as much needed functionality.

          A fast, small, well-designed, maintainable and standards-compliant C compiler fits the goals of most software projects much better. It is something I have been awaiting for years too.

          Igor.

      2. By Gordon Willem Klok (68.148.17.121) gwk@gwk.ca on http://www.gwk.ca

        >
        > It's fast, but only because it doesn't do the same level of optimizations that GCC does, nor have the same open architecture that allows for multiple language support.
        >

        Conversely, GCC's multiple-language support makes optimization very complicated. GCC produces the slowest code of any compiler with significant "market share" on the x86 architecture, e.g. Microsoft's Visual C++, the Portland Group compilers, or Intel's compilers. I don't have any first-hand experience with compilers for non-x86 architectures, but I would be shocked if this observation did not hold for them as well.

        > In the long term, projects like GCJ may well help make C itself less relevant. That's still a controversial thing to say

        Not controversial; it is a silly thing to say. Massive rewrites are foolhardy: replacing millions of lines of code (something like 10 million in the case of OpenBSD) that have been extensively tested and debugged over a period of ten years (and in some cases more than 20). Certainly Java would prevent some of the bugs that have already been corrected in the C code from reappearing in a rewrite; however, it's not a magic bullet. There will be tonnes and tonnes of bugs, whether already corrected in the existing code base or newly introduced, that will have to be tracked down and corrected.

        And all that for what exactly? What do we really gain?

        >- there are many people who've convinced themselves that it's impossible to write something like an operating system kernel in a managed language.

        Perhaps something like Cyclone might be a good choice for systems programming in a quasi-managed language; what should be acutely clear is that JAVA IS NOT A GOOD CHOICE. Part of the reason C was so successful in this space is that, truth be told, there is fairness in calling it a portable assembly language. A kernel written in Java would require huge amounts of assembly language for each architecture it supported, and if you think C programming is error-prone, writing large chunks of code in assembler is far worse (and a 30+ year step backward in operating system development).

        I would go even further and claim that the OOP paradigm, which Java subscribes to but executes very poorly, is an utterly flawed development model for kernels; just look at Darwin, perhaps the ugliest, most convoluted and slowest kernel in widespread use.

      3. By Anonymous Coward (74.115.21.120) on

        > In the long term, projects like GCJ may well help make C itself less relevant. That's still a controversial thing to say - there are many people who've convinced themselves that it's impossible to write something like an operating system kernel in a managed language. I suspect as time goes by, and the benefits of such an approach become more obvious, that view will slowly disappear. Requires someone to be brave and develop the code though in the face of certain ridicule.

        You're an idiot, plain and simple. A managed language is not needed to gain security; any high-level language will do fine. And if you want real security, you need formal verification, not just the removal of functionality.

  2. By Anonymous Coward (85.97.106.190) on

    Will there be a "p++"? Otherwise, how will groof(1) be compiled? A "p++" may also help compiling many ports like firefox faster.

    1. By Anonymous Coward (85.97.106.190) on

      > Will there be a "p++"? Otherwise, how will groof(1) be compiled? A "p++" may also help compiling many ports like firefox faster.

      I meant groff(1)

    2. By Anonymous Coward (74.13.45.175) on

      > Will there be a "p++"? Otherwise, how will groof(1) be compiled? A "p++" may also help compiling many ports like firefox faster.

      One step at a time, sonny; let's let them make pcc actually work first. It only supports most of C99, and there is still more of C itself to support, so let's not pester the developers about the bothersome C++.

      1. By Anonymous Coward (66.92.146.186) on

        > > Will there be a "p++"? Otherwise, how will groof(1) be compiled? A "p++" may also help compiling many ports like firefox faster.
        >
        > One step at a time, sonny; let's let them make pcc actually work first. It only supports most of C99, and there is still more of C itself to support, so let's not pester the developers about the bothersome C++.

        hopefully someones reading this ...

        according to j. sherril on the df lists lately,
        the heirloom doctools (cddl licensed sun 'original' troff)
        works pretty well at building the openbsd manpages,
        but not so much the freebsd/dfbsd manpages (macros have diverged)

        switching to these, while not being purely bsd licensed, is a step better
        (cddl is more 'lgpl' like than anything else)

        perhaps that's a direction to take..
        plus is 'the original unix troff'..

        in any case.. hope someone sees this :)

        1. By Anonymous Coward (74.13.45.175) on

          > hopefully someones reading this ...
          >
          > according to j. sherril on the df lists lately,
          > the heirloom doctools (cddl licensed sun 'original' troff)
          > works pretty well at building the openbsd manpages,
          > but not so much the freebsd/dfbsd manpages (macros have diverged)
          >
          > switching to these, while not being purely bsd licensed, is a step better
          > (cddl is more 'lgpl' like than anything else)
          >
          > perhaps that's a direction to take..
          > plus is 'the original unix troff'..
          >
          > in any case.. hope someone sees this :)

          The CDDL is a mess, hardly better than the GPL, just differently bothersome. It would probably make more sense to simply draw from the Caldera-released AT&T code, which is under the 4-clause BSD license, and update it.

          1. By Anonymous Coward (2001:6f8:94d:4:2c0:9fff:fe1a:6a01) on

            > bothersome. It would probably make more sense to simple attempt to
            > draw from the Caldera released AT&T code that is under the 4-clause
            > BSD and update it.

            While it does, the code from _that_ source is illegible, compare
            for yourself (oh okay, C++ groff is illegible too):
            | http://cvs.mirbsd.de/src/usr.bin/oldroff/

            I have sources for ditroff from a time before the USA signed
            the Berne convention, without copyright notices attached, but Theo
            told me that, since we're not US citizens, we cannot legally use
            this code. I tried to mail Brian Kernighan about it, but never got
            an answer (he is - supposedly - the author of ditroff).

            Anyway, the version of nroff shown above has been hacked to build
            everything fine except the terminfo(5) manual page (too many
            diversions for tbl), but it still has a few issues on sparc, and
            fewer on i386 (some line breaks are just not emitted). Good luck
            fixing that, though...

  3. By Arthur Dent (87.194.37.218) on

    Looks like there is some OpenBSD-related activity on the Tendra front as well:

    From: http://www.tendra.org/

    2007-09-15: We've grown a few more developers! Of particular interest: Tobias has been working on supporting OpenBSD (especially to allow TenDRA to build using its native make), and Kevin developing more expressive features for Lexi.

    1. By Chris Lattner (70.91.206.190) on http://nondot.org/sabre

      > Looks like there is some OpenBSD-related activity on the Tendra front as well:
      >
      > From: http://www.tendra.org/
      >
      > 2007-09-15: We've grown a few more developers! Of particular interest: Tobias has been working on supporting OpenBSD (especially to allow TenDRA to build using its native make), and Kevin developing more expressive features for Lexi.
      >


      You should also check out LLVM (http://llvm.org) and the new C front-end being developed: http://clang.llvm.org . Both are BSD licensed and the LLVM optimizer/backend produces *better* code than GCC in many cases. The new C front-end is not up to PCC yet, but it is moving very quickly.

      -Chris

      1. By Anonymous Coward (18.243.2.53) on

        > > Looks like there is some OpenBSD-related activity on the Tendra front as well:
        > >
        > > From: http://www.tendra.org/
        > >
        > > 2007-09-15: We've grown a few more developers! Of particular interest: Tobias has been working on supporting OpenBSD (especially to allow TenDRA to build using its native make), and Kevin developing more expressive features for Lexi.
        > >
        >
        >
        > You should also check out LLVM (http://llvm.org) and the new C front-end being developed: http://clang.llvm.org . Both are BSD licensed and the LLVM optimizer/backend produces *better* code than GCC in many cases. The new C front-end is not up to PCC yet, but it is moving very quickly.
        >
        > -Chris

        Well, I wonder why all those "kind of" BSD-licensed projects can't work together; then they would have a great compiler which could compete with GCC (which is really fucked up - just to mention the padding, which changes with almost every new major release... sucks...).

        1. By Anonymous Coward (74.13.45.175) on

          > Well, I wonder why all those "kind of" BSD-licensed projects can't work together; then they would have a great compiler which could compete with GCC (which is really fucked up - just to mention the padding, which changes with almost every new major release... sucks...).

          Different strokes for different folks, there are various designs for compilers, perhaps differences of opinion on what makes a compiler good arise.

      2. By Anonymous Coward (82.224.188.215) on

        Yes, LLVM is way more promising than pcc IMHO.

  4. By Anonymous Coward (24.22.214.92) on

    Perhaps it can be called OpenCC to fit the usual Open* naming scheme? :P

    1. By Anonymous Coward (74.13.45.175) on

      > Perhaps it can be called OpenCC to fit the usual Open* naming scheme? :P

      NetBSD developers started it, so PCC for portable seems good enough.

  5. By Anonymous Coward (208.191.177.19) on

    Of course statements like "GCC is buggy" can be opinion as easily as fact, but I wonder why it hasn't been superseded already if it is so bad. I'm not defending it, I'm just curious, as one good thing about F/LOSS is that software natural selection works much more efficiently than with, say, Microsoft.

    As long as everything works, I don't care what compiler they use. But trying to maintain ease and compatibility of porting applications might get a bit sticky.

    1. By Ray Percival (sng) on http://undeadly.org/cgi?action=search&sort=time&query=sng

      > Of course statements like "GCC is buggy" can be opinion as easily as fact, but I wonder why it hasn't been superseded already if it is so bad. I'm not defending it, I'm just curious, as one good thing about F/LOSS is that software natural selection works much more efficiently than with, say, Microsoft.
      >
      > As long as everything works, I don't care what compiler they use. But trying to maintain ease and compatibility of porting applications might get a bit sticky.


      The license catfight is over. Go back to slashdot now, please. Just a hint here, F/LOSS is a GNUism. We don't really think that way here.

      1. By Anonymous Coward (208.191.177.19) on

        Oh well. I guess I owe you a thank you for reminding me what a waste of time it is posting here.

      2. By autocrat (69.77.171.215) on

        > Just a hint here, F/LOSS is a GNUism. We don't really think that way here.


        Collective groupthink appears to be alive and healthy.

        It's strange, you trumpet this almost as though you're _glad_ to be a participant of the feeble-minded march-to-the-beat-of-a-single-drummer posse.

        Do yourself, and others, a big favor - speak for yourself.

        1. By Anonymous Coward (70.173.172.228) on

          > > Just a hint here, F/LOSS is a GNUism. We don't really think that way here.
          >
          >
          > Collective groupthink appears to be alive and healthy.
          >
          > It's strange, you trumpet this almost as though you're _glad_ to be a participant of the feeble-minded march-to-the-beat-of-a-single-drummer posse.
          >
          > Do yourself, and others, a big favor - speak for yourself.
          >
          >

          says the guy using the term "F/LOSS".

          1. By Anonymous Coward (208.191.177.19) on

            > says the guy using the term "F/LOSS".

            Actually, no, it wasn't, though I agree with him/her. I'm sorry I'm not as hip as you and Mr. Percival on the vernacular. I asked what I thought was a legitimate question. Care to suggest an alternative term, one that doesn't offend your sensibilities (Ray, you are welcome to chime in here, too)?

    2. By Anonymous Coward (142.205.213.176) on

      > Of course statements like "GCC is buggy" can be opinion as easily as fact, but I wonder why it hasn't been superseded already if it is so bad. I'm not defending it, I'm just curious, as one good thing about F/LOSS is that software natural selection works much more efficiently than with, say, Microsoft.
      >
      > As long as everything works, I don't care what compiler they use. But trying to maintain ease and compatibility of porting applications might get a bit sticky.

      It's popular for the same reason Microsoft is popular. Human nature. It's the same reason companies pour money into horribly coded, proprietary Unix/Linux nightmares, which require months of testing just to implement a security patch (can't have downtime, and all that proprietary code doesn't update together).

      It's the current standard. The companies have latched onto it, and companies are notorious for caring more about the bottom line than quality implementations. In their situation I would do the same thing, which is continue throwing my support behind GCC. Their job is to care about money. GCC works. Perhaps not well, perhaps slowly.. (see Marc Espie's post, above), but it still works, and it makes no economic sense to start a new project.

      It's why I use OpenBSD, because I find Linux is also filled with such commercial nonsense, which only bogs down development and hinders the production of quality code.

      Don't mind the zealots, they're out in force on both sides.

      1. By Anonymous Coward (142.205.213.176) on

        > > Of course statements like "GCC is buggy" can be opinion as easily as fact, but I wonder why it hasn't been superseded already if it is so bad. I'm not defending it, I'm just curious, as one good thing about F/LOSS is that software natural selection works much more efficiently than with, say, Microsoft.
        > >
        > > As long as everything works, I don't care what compiler they use. But trying to maintain ease and compatibility of porting applications might get a bit sticky.
        >
        > It's popular for the same reason Microsoft is popular. Human nature. It's the same reason companies pour money into horribly coded, proprietary Unix/Linux nightmares, which require months of testing just to implement a security patch (Can't have downtime, and all that proprietary code doesn't update together)
        >
        > It's the current standard. The companies have latched onto it, and companies are notorious for caring more about the bottom line than quality implementations. In their situation I would do the same thing, which is continue throwing my support behind GCC. Their job is to care about money. GCC works. Perhaps not well, perhaps slowly.. (see Marc Espie's post, above), but it still works, and it makes no economic sense to start a new project.
        >
        > It's why I use OpenBSD, because I find Linux is also filled with such commercial nonsense, which only bogs down development and hinders the production of quality code.
        >
        > Don't mind the zealots, they're out in force on both sides.

        A little clarification I should have made: The security patch testing issue is something I experience with things like AIX. In fairness, even commercial Linux makes it generally easy to upgrade. Unless, of course, you're running proprietary Linux software, then the testing component must be added back in. Unless you're lucky enough to have a vendor that updates their code often.

    3. By Bob Beck (129.128.11.43) beck@openbsd.org on


      > As long as everything works, I don't care what compiler they use. But trying to maintain ease and compatibility of porting applications might get a bit sticky.

      What, like the assumption that "all the world is GCC"?

      Try using another compiler (like a commercial one on linux) and you
      run into this everywhere. GCC == C + Gccisms.

      It's as wrong as the "all the world is Loonix" assumption that apps make.

      I get really tired of portability being used as an argument against doing something good, especially when most of the "portability" issues come from apps that assume all the world is Linux and GCC, then spend 10 times the time they spend actually compiling the package running squiddy little test programs in autoconf/configure, and then break when you aren't running gcc on Linux, because for all the autoconf crap they've never been written to deal with anything else.




      1. By corey (ex-AC) (208.191.177.19) on

        I don't disagree. I like OpenBSD better than Linux, I feel it's easier to get it to do what I want, but I still use Linux too (and Windows, for that matter) because some of the apps I use are written on and for those OSes.

        I don't do much C programming, and so I never presumed to question why some of these Linux apps, particularly those that did not have to use any Linux-specific features, couldn't be made to run easily on the BSDs or commercial Unices. Maybe I should have. In any case, anything that helps you and the other OpenBSD devs improve OpenBSD is worthy of consideration.

        Thanks for taking the time to answer my question.

  6. By Leonardo Rodrigues (201.88.84.254) on

    Enlighten me please...

    Does that mean I'll be able to compile C99 code?

  7. By Anonymous Coward (70.54.53.132) on

    Tiny C Compiler ftw!

    http://fabrice.bellard.free.fr/tcc/

    (unsure of the license)

    try this:
    echo -e '#include <stdio.h>\nint main (){ puts("Hello World!"); }'| tcc -run /dev/stdin

    1. By Anonymous Coward (74.13.45.175) on

      > Tiny C Compiler ftw!
      >
      > http://fabrice.bellard.free.fr/tcc/
      >
      > (unsure of the license)
      >
      > try this:
      > echo -e '#include <stdio.h>\nint main (){ puts("Hello World!"); }'| tcc -run /dev/stdin

      GPL, says so on the page you list.

      1. By Anonymous Coward (65.87.143.54) on

        > > Tiny C Compiler ftw!
        > >
        > > http://fabrice.bellard.free.fr/tcc/
        > >
        > > (unsure of the license)
        > >
        > > try this:
        > > echo -e '#include <stdio.h>\nint main (){ puts("Hello World!"); }'| tcc -run /dev/stdin
        >
        > GPL, says so on the page you list.

        No, LGPL, as it says on the page you supposedly read.

        1. By Chris (24.76.100.162) on

          > > > Tiny C Compiler ftw!
          > > >
          > > > http://fabrice.bellard.free.fr/tcc/
          > > >
          > > > (unsure of the license)
          > > >
          > > > try this:
          > > > echo -e '#include <stdio.h>\nint main (){ puts("Hello World!"); }'| tcc -run /dev/stdin
          > >
          > > GPL, says so on the page you list.
          >
          > No, LGPL, as it says on the page you supposedly read.

          Think about why the difference doesn't matter in this case. Enlightenment can be yours.

          1. By Anonymous Coward (80.108.103.172) on

            > > > > Tiny C Compiler ftw!
            > > > >
            > > > > http://fabrice.bellard.free.fr/tcc/
            > > > >
            > > > > (unsure of the license)
            > > > >
            > > > > try this:
            > > > > echo -e '#include <stdio.h>\nint main (){ puts("Hello World!"); }'| tcc -run /dev/stdin
            > > >
            > > > GPL, says so on the page you list.
            > >
            > > No, LGPL, as it says on the page you supposedly read.
            >
            > Think about why the difference doesn't matter in this case. Enlightenment can be yours.



            Stop bullshitting him.
            You said it's GPL licensed, but it is LGPL.
            YOU were wrong.

            Apologize before trying for the "enlightenment" approach!

            1. By Anonymous Coward (74.13.45.175) on

              > Stop bullshitting on him.
              > You said its GPL license, but it is LGPL.
              > YOU were wrong.
              >
              > Apologize before trying for the "enlightenment" approach!

              No, I said it was GPL because I only skimmed real quick; they did not. I was wrong and they were not. They were a little smart-mouthed, but not wrong. You, on the other hand, were, and should apologize to that anonymous poster for your behaviour.

            2. By Chris (24.76.100.162) on

              >
              > Stop bullshitting on him.
              > You said its GPL license, but it is LGPL.
              > YOU were wrong.
              >
              > Apologize before trying for the "enlightenment" approach!

              It's not so hard to look at the IP addresses.

    2. By Anonymous Coward (71.111.154.250) on

      Rob Landley has been working on a fork of tcc, as Bellard hasn't done much with it in a while.

  8. By Anonymous Coward (82.224.188.215) on

    Why, but why?

    pcc, tendra, tcc, etc. are funny projects, but they are way, way, way behind gcc when it comes to optimizations. gcc has received tons of substantial contributions from large companies, and I don't see how alternative free compilers can have any chance to compete with gcc nowadays. It's way too late.

    And gcc is not bug-free, but it has a huge community of users. I wouldn't trust a compiler that no one uses like pcc for any serious work.

    What's the goal with importing pcc? Switching from gcc to compile everything in OpenBSD? OpenBSD is already slow compared to other free Unix-like systems, so why slow it down again by switching to a non-standard compiler that has none of the optimizations gcc has had for 10 years?


    1. By Anonymous Coward (74.13.45.175) on

      Did you not notice how much slower gcc 4 is? gcc is not getting better, it's getting markedly worse. Platforms are being dropped and performance is being lost; gcc is not moving forward.

      1. By Anonymous Coward (82.224.188.215) on

        > Did you not notice how much slower gcc 4 is? gcc is not getting better, it's getting markedly worse. Platforms are being dropped and performance is being lost; gcc is not moving forward.

        Sorry, but no, I don't notice how much slower gcc 4 is. If you are a developer and you want to quickly compile your code for testing, use tcc or pcc. But what users want is fast compiled code, even if it originally took ages to compile.
        We're in the year 2007. The focus of today's compilers is to provide automatic code vectorisation. This is needed to use today's processors. This requires a lot of complex work. Intel and the gcc team have been working on this for years. I don't expect pcc to be able to do this anytime soon. Why not help the LLVM project instead of regressing back to a compiler that was designed 35 years ago?

        1. By Nick Holland (68.43.113.17) nick@holland-consulting.net on http://www.openbsd.org/faq/

          > Sorry, but no, I don't notice how much slower gcc 4 is.

          You need to open your eyes then. Try a build on a Pentium 90. Or my AMD XP2700+.

          > If you are a developer and you want to quickly compile your code for
          > testing, use tcc or pcc. But what users want is fast compiled code,
          > even if it originally took ages to compile.

          Curiously, in spite of all these "optimizations" on the compiler, the only time performance changes in OpenBSD is when the OpenBSD developers change things. These "optimizations" seem to have no real-life benefit to users.

          It is like "gzip -9". "better" (if you consider 2% better) compression at an absurd cost. Huge penalty, virtually no gain (though some argue "any gain is gain!").

          Remember, computers aren't car racing. 1%, even 10% performance differences rarely matter to the user at the keyboard. It won't change the amount of work done at the end of the day. It probably won't even be noticed by the user.

          > We're in the year 2007. The focus of today's compilers is to provide
          > automatic code vectorisation. This is needed to use today's
          > processors. This requires a lot of complex work. Intel and the gcc
          > team have been working on this for years. I don't expect pcc, to be
          > able to do this anytime soon. Why not help the LLVM project instead
          > of regressing back to a compiler that was designed 35 years ago?

          Interesting that you say that.
          25 years ago, I used a compiler which produced a "hello world" program in 2k with no command line options. No external libraries. Completely free-standing. Copy that 2k file to a floppy, put it on any other machine of the same OS, and it would run. I'm sure you would call that compiler "unoptimized", and I'm sure that view would be shared by the vast majority of compiler "experts" out there. I call that the most optimized compiler I've ever seen.

          That compiler (BDS C for CP/M) shipped on a single 250k floppy disk, ran well off floppy, ran on single-digit MHz machines, and I never saw a bug in the compiler itself.

          Granted, there were some gotchas: this "unoptimized" compiler was very optimized for its environment: it was written in assembly (not self-hosting), it was a "tiny C" at best, and even then, altered to fit the environment, so perhaps it could be better called a "C-like assembly language for 8080", but still...after seeing a 2k "hello world" executable, I have difficulty talking about "optimized" compilers without laughing myself silly.

        2. By Arthur Dent (87.194.37.218) on

          > > You not notice how much slower gcc 4 is? gcc is not getting better, it's getting markedly worse. Platforms are being dropped and performance is being lost, gcc is not moving forward. > > Sorry, but no, I don't notice how much slower gcc 4 is. If you are a developper and you want to quickly compile your code for testing, use tcc or pcc. But what users want is fast compiled code, even if it originally took ages to compile. > We're year 2007. The focus of today's compilers is to provice automatic code vectorisation. This is needed to use today's processors. This requires a lot of complex work. Intel and the gcc team have been working on this for years. I don't expect pcc, to be able to do this anytime soon. Why not help the LLVM project instead of regressing back to a compiler that was designed 35 years ago? Well, what THIS user wants is CORRECT code which should theoretically mean that we users can use any reasonably standards-compliant compiler we want. If the OpenBSD developers want to use a compiler that increases their ability to produce correct code and that compiles fast, it means that they have more time to do many other things. We all win. What's so hard to understand about that? GCC is so linux-centric that it makes sense for the BSD community to have a compiler that does things the BSD way. This is not about licensing, but about different priorities.

          1. By Karl Sjödahl (Dunceor) on

            Damnit people, how hard is it to quote old posts correctly and use a newline at least? Damn hard to read.

            1. By couderc (213.41.184.19) on

              > Damnit people, how hard is it to quote old posts correctly and use a newline at least? Damn hard to read.

              It's a side effect of the infinite improbability drive :)

          2. By Anonymous Coward (151.188.18.58) on

            > GCC is so linux-centric that it makes sense for the BSD community to have a compiler that does things the BSD way. This is not about licensing, but about different priorities.
            >

            Oh my God, are you just *trying* to lie, or are you really that misinformed? GCC was never and is not "linux-centric", as you put it. The fact that it runs on a bunch of platforms (one of which is GNU/Linux on multiple CPU types, another of which is OpenBSD on multiple CPU types, another of which is Solaris, etc.) totally invalidates your premise. Hell, it's even found on Mac OS X! Instead of talking about a compiler "that does things the BSD way," whatever that is, how about joining the GCC dev team and submitting some patches where you think it is deficient?

            That said, I personally have no problem with PCC. To the contrary, I think it's a fine idea to have it, if it turns out to be good, and I wish its developers nothing but the best in their work. It's kinda like OpenOffice.org vs. KOffice; each has its strengths and I use both. Back in the bad old (proprietary) days, I'd compile apps with both Borland C and Watcom C, just to make sure I hadn't gotten lazy and used any funky compiler-specific optimizations. Worked out pretty well for me doing that.

            1. By Anonymous Coward (213.56.159.23) on


              > Instead of talking about a compiler "that does things the BSD way,"
              > whatever that is, how about joining the GCC dev team and submitting
              > some patches where you think it is deficient?

              Try it without working for a major linux vendor. Let us know how it went.

            2. By Marc Espie (163.5.254.20) espie@openbsd.org on


              > Oh my God, are you just *trying* to lie, or are you really that misinformed? GCC was never and is not "linux-centric", as you put it. The fact that it runs on a bunch of platforms (one of which is GNU/Linux on multiple CPU types, another of which is OpenBSD on multiple CPU types, another of which is Solaris, etc.) totally invalidates your premise. Hell, it's even found on Mac OS X! Instead of talking about a compiler "that does things the BSD way," whatever that is, how about joining the GCC dev team and submitting some patches where you think it is deficient?

              I am part of the GCC dev team, have been for a few years, and I have struggled over various issues.

              I can tell you it is somewhat linux-centric, and yes, it is fairly hard to get stuff in which doesn't fit within the GCC agenda, which is definitely *not* the OpenBSD agenda.

              We've been crying for years that GCC was getting too slow, among other things. We also do not like some of the inline `improvements' (specifically, the part that makes memfill vanish in crypto code).

              We're a definite minority in there.

              I've more or less given up on GCC development, because it's too damn frustrating. It takes a lot of time to just keep up with the new versions. There are controversial decisions (for us) like insisting on gnu-make for the new version.

              I've spent quite a few hours making the libstdc++ recognize the little part of internationalization we had (it's all-or-nothing in their configure land). We haven't had any luck getting it to adopt strlcpy/strlcat (linux doesn't have them, so it must be garbage).

              All of this is solid fact. You just have to read through the gcc mailing-list archives to see my name, multiple times. Sometimes with code attached. Sometimes with hairy problems that no-one knows how to solve (and no-one cares, because it's not linux).

              So, shelve the Troll and give me some facts.

        3. By Anonymous Coward (87.194.37.218) on

          Sorry badly posted, reposting again...

          > > "Sorry, but no, I don't notice how much slower gcc 4 is. If you are a developer and you want to quickly compile your code for testing, use tcc or pcc. But what users want is fast compiled code, even if it originally took ages to compile."

          Well, what this user wants is correct code which should theoretically mean that we users can use any reasonably standards-compliant compiler we want. If the OpenBSD developers want to use a compiler that increases their ability to produce correct code and that compiles fast, it means that they have more time to do many other things.

          We all win.

          What's so hard to understand about that? GCC is so linux-centric that it makes sense for the BSD community to have a compiler that does things the BSD way. This is not about licensing, but about different priorities.

        4. By Anthony (198.53.149.206) on

          > We're in the year 2007. The focus of today's compilers is to provide
          > automatic code vectorisation. This is needed to use today's processors.

          Not really.

          Multiple cores and SIMD processing are responses to the inability to keep increasing the number of scalar instructions that can be shoved down a single pipeline. This number is still very much a bottleneck, particularly for servers, and the CPUs that do it the fastest (Core 2, POWER6, IA-64, etc) have a significant performance advantage over CPUs like Cell, that favor vector performance.

          Most OpenBSD machines spend most of their CPU time in kernelspace, in the network stack and PF. Vectorization doesn't help this at all.

      2. By Anonymous Coward (69.223.13.78) on

        > Platforms are being dropped

        this piece of the argument eludes me - OpenBSD has dropped a bunch of platforms (mips and arm, some 68K) over the years for their own
        "reasons" - probably the very same reason gcc dropped them - nobody
        (or not the right people) is/are interested.

        1. By Anonymous Coward (85.178.107.64) on

          > > Platforms are being dropped
          >
          > this piece of the argument eludes me - OpenBSD has dropped a bunch of platforms (mips and arm, some 68K) over the years for their own
          > "reasons" - probably the very same reason gcc dropped them - nobody
          > (or not the right people) is/are interested.
          >

          Dropping an architecture yourself and being forced to drop it by others are two different things.

        2. By Marc Espie (213.41.185.88) espie@openbsd.org on

          > > Platforms are being dropped
          >
          > this piece of the argument eludes me - OpenBSD has dropped a bunch of platforms (mips and arm, some 68K) over the years for their own
          > "reasons" - probably the very same reason gcc dropped them - nobody
          > (or not the right people) is/are interested.
          >

          One of the reasons for dropping these platforms is the GNU toolchain: either the new binutils/new GCC was flaky on them, or it was starting to take prohibitively long to compile stuff on these.

          I can say with certainty, for instance, that the lack of speed of GCC was one of the nails in the coffin of the amiga port of OpenBSD (that, plus the fact my SCSI card was acting up a lot, the box was making too much noise for my small apartment at the time, and I got a faster amiga eventually... but somewhat too late).

          It takes an *interesting* state of mind to still be able to work with legacy architectures with `current' compiler technology...

          1. By Daniel Ouellet (66.63.10.94) daniel@presscom.net on

            > I can say with certainty, for instance, that the lack of speed of GCC was one of the nails in the coffin of the amiga port of OpenBSD (that, plus the fact my SCSI card was acting up a lot, the box was making too much noise for my small apartment at the time, and I got a faster amiga eventually... but somewhat too late).

            Interesting you say that. My favorite at the time was Aztec on Amiga. Sadly both are gone. Granted, Aztec wasn't open source, but it was a very good and small compiler.

            1. By Janne Johansson (193.11.27.146) jj@inet6.se on

              > I can say with certainty, for instance, that the lack of speed of GCC was one of the nails in the coffin of the amiga port of OpenBSD (that, plus the fact my SCSI card was acting up a lot, the box was making too much noise for my small apartment at the time, and I got a faster amiga eventually... but somewhat too late).
              >
              > Interesting you say that. My favorite at the time was Aztec on Amiga. Sadly both are gone. Granted, Aztec wasn't open source, but it was a very good and small compiler.

              Then again, all the system books had examples for which Lattice C (later SAS/C) was far better to use, since the differences made some stuff uncompilable on Aztec C.
              There are some similarities to "my kernel adapts to the gcc compiler" here.

    2. By Anonymous Coward (85.178.82.0) on

      > Why, but why?
      >
      > pcc, tendra, tcc, etc. are funny projects, but they are way, way, way behind gcc when it comes to optimizations. gcc has received tons of substantial contributions from large companies, and I don't see how alternative free compilers can have any chance to compete with gcc nowadays. It's way too late.
      >
      > And gcc is not bug-free, but it has a huge community of users. I wouldn't trust a compiler that no one uses like pcc for any serious work.
      >
      > What's the goal with importing pcc? Switching from gcc to compile everything in OpenBSD? OpenBSD is already slow compared to other free Unix-like systems, so why slow it down again by switching to a non-standard compiler that has none of the optimizations gcc has had for 10 years?

      Ok, let's explain it pretty shortly:

      1. GCC is buggy (it is plain BUGGY, and if you hit such a bug you need
      to make a workaround or hope it will get fixed some day).
      2. GCC has "support"... it may "optimize" your code to death (-O3 can
      produce corrupted code).. see 1.!
      3. It's GPLed.. but licenses aside: GNU drops architectures because
      of "reasons".
      So if you provide OpenBSD for XYZ arch and it gets dropped by GCC
      because they claim "nobody uses it anymore" or something like that,
      you're pretty fucked if you still have an XYZ arch somewhere.
      I see no reason to drop an architecture at all; let it stay
      unmaintained or so. With pretty "old" architectures there should
      be (should!) fewer bugs (which is not the case if you deal with GCC
      anyway... but hell).
      4. GCC is incompatible with itself (2.x -> 3.x -> 4.x).
      The padding changed, and other things changed as well.

      Why not get rid of it? There are TONS of applications written for GCC which may use "workarounds" that make the code more "incompatible" with other so-called "C compilers". GCC's extensions are also kind of problematic.

      Take a plain C99 compiler and I bet most things wouldn't compile, even though they're written in "C". Just my guess right now...

      GCC is a mess...
      Of course a compiler can optimize the code (which will be VERY NECESSARY in the future! Multi-core CPUs and so on...) and still be faster than GCC.

      I personally hope Theo just stands up and does something about this fucked-up situation. In my opinion he could help get such a project going and get it some attention or support (depending on relations to "sponsors" or so). I'm pretty sure that if somebody hugged AMD, they might even consider supporting the compiler.

      As far as I know GCC gets no real support from Intel, because Intel loves its ICC. I'm not aware of any support from AMD either. But I'm not god, so I can't know everything.

    3. By Anonymous Coward (82.19.71.57) on

      > OpenBSD is already slow compared to other free Unix-like systems

      Is it? It's not the fastest, but I wouldn't say it was slow.

    4. By Lars Hansson (bysen) on

      > Why, but why?

      http://undeadly.org/cgi?action=article&sid=20070915195203&pid=52

      As a side note, do the people complaining REALLY think the developers haven't thought about this? It's not like they just went "Hey, let's import pcc. We don't know how it works or anything, but let's just do it. I've used it for 5 minutes and it compiles helloworld.c".

    5. By Anonymous Coward (208.152.231.254) on

      > What's the goal with importing pcc? Switching from gcc to compile everything in OpenBSD? OpenBSD is already slow compared to other free Unix-like systems, so why slow it down again by switching to a non-standard compiler that has none of the optimizations gcc has had for 10 years?
      >

      I suspect for the most part it's the usual license zealotry that seems to have reached a peak (or is that a nadir) over the last few months.

      If this really was about getting a faster, more reliable compiler that supports more architectures, an older version of GCC would have been forked. PCC is a particularly bad idea:

      - Poor separation of the front and back-ends means it's only ever going to be a C compiler, unlike GCC.
      - Ancient code, not even ANSI C99 level. By the time it's compliant, expect it to be a mess.
      - Speed of compilation at some cost: the compiler does almost no optimizations, not even the uncontroversial ones. Code generated is large and slow. Expect the number of supported architectures to be poor, not because it can't technically generate code for a particular target, but because the timings and size of the kernel would preclude it from running on anything useful.
      - Poor multiarchitecture support (unless you're limiting yourself to 1970s systems and ix86.) This will need to be added before it can be considered credible.

      I mean, that last one's the biggest joke. The complaint is that GCC doesn't support enough architectures, so you're switching to PCC? WTF?

      And why does GCC drop less popular architectures from time to time? Answer: only because nobody is volunteering to maintain them. So, of the two options:

      - Contribute to GCC by maintaining output options for architectures you want

      or

      - Modify an old, woefully outdated, compiler that barely supports most of the architectures you want to support them

      people are seriously picking the latter?

      The proponents of PCC here are following an agenda. It's nice to see antique code given a polish and made to work from time to time, but actually switching OpenBSD to this thing, as proposed here by numerous contributors, is so completely out of left field that I can only assume this is pretty much another salvo in the unnecessary war against the FSF.

      Bizarre.

      1. By djm (203.217.30.85) on

        > If this really was about getting a faster, more reliable, compiler, that supports more architectures, an older version of GCC would have been forked. PCC is a particularly bad idea:
        >
        > - Poor separation of the front and back-ends means it's only ever
        > going to be a C compiler, unlike GCC.

        wrong: there is an f77 frontend, but it needs a little work. Compare to the deliberate commingling in gcc's design, driven by FSF ideology to prevent proprietary pseudo-forks that reuse either the front or back ends.

        > - Ancient code, not even ANSI C99 level. By the time it's compliant,
        > expect it to be a mess.

        wrong again: it is mostly C99 already

        > - Speed of compilation at some cost: the compiler does almost
        > no optimizations, not even the uncontroversial ones. Code
        > generated is large and slow.

        Nobody is suggesting the generated code is as fast as gcc, but it has a great register allocator and already supports SSA.

        > Expect the number of supported
        > architectures to be poor, not because it can't technically generate
        > code for a particular target, but because the timings and size of
        > the kernel would preclude it from running on anything useful.

        Evidence for this? If the generated code is 50% slower than gcc, this doesn't *preclude* it from being useful. I think you are overstating your argument.

        > - Poor multiarchitecture support (unless you're limiting yourself to
        > 1970s systems and ix86.) This will need to be added before it can be
        > considered credible.

        Wow, wrong again: there are PPC and MIPS backends. The i386 backend is the *new* one, and took all of two days to write.

        > The proponents of PCC here are following an agenda. It's nice to see
        > antique code given a polish and made to work from time to time, but
        > actually switching OpenBSD to this thing, as proposed here by numerous
        > contributors, is so completely out of left field that I can only
        > assume this is pretty much another salvo in the unnecessary war
        > against the FSF.

        It is pretty amusing and hypocritical that you can use a series of untruths to support the assertion that people are following an agenda. It seems like you have made up arguments to support your criticism without having checked out pcc at all.

        The simple truth is that the OpenBSD developers want a clean and easy-to-hack complement to gcc. One day, it might replace gcc as the default compiler, but there is a lot of work to do first.

      2. By Lars Hansson (bysen) on

        > > What's the goal with importing pcc? Switching from gcc to compile everything in OpenBSD? OpenBSD is already slow compared to other free Unix-like systems, so why slow it down further by switching to a non-standard compiler that has none of the optimizations gcc has had for 10 years?
        > >
        >
        > I suspect for the most part it's the usual license zealotry that seems to have reached a peak (or is that a nadir) over the last few months.
        >
        > If this really was about getting a faster, more reliable, compiler, that supports more architectures, an older version of GCC would have been forked.
        > PCC is a particularly bad idea:

        It's interesting how people who aren't involved and most certainly aren't developers somehow always know what the project and the developers need to do and what not to do.

        > - Poor separation of the front and back-ends means it's only ever going to be a C compiler, unlike GCC.

        So what? Who the hell needs stuff like Fortran and Ada in *base*? What's good about tacking *other* languages onto a *C* compiler?
        Poor separation is also a problem with gcc, afaik.

        > - Ancient code, not even ANSI C99 level. By the time it's compliant, expect it to be a mess.

        It's awesome that you can predict the future. Seriously. Have you considered joining "Who wants to be a superhero"?

        Again, do you really think no one thought about this? Are you people so damn dense that you think it was imported on a whim? It's not like pcc is intended to replace gcc tomorrow, you know.

        > Bizarre.

        Not as bizarre as the backseat experts who always creep out of the woodwork as soon as some changes are ahead.

        Personally I don't give a fsck what compiler OpenBSD is using as long as it works. It's not like you can't install gcc from ports if you really want it.

      3. By art (213.56.159.23) on


        > I suspect for the most part it's the usual license zealotry that seems
        > to have reached a peak (or is that a nadir) over the last few months.

        Yes, of course. The fact that we've been looking for a new compiler since at least 1999 doesn't mean anything.

        > If this really was about getting a faster, more reliable, compiler,
        > that supports more architectures, an older version of GCC would have
        > been forked.

        Yes, of course, why didn't we think about it, we must be very stupid.

        > PCC is a particularly bad idea:
        >
        > - Poor separation of the front and back-ends means it's only ever
        > going to be a C compiler, unlike GCC.

        Yes and Linux will only ever run on i386.

        > - Ancient code, not even ANSI C99 level. By the time it's compliant,
        > expect it to be a mess.

        Oh my god! OpenBSD contains ancient code, let's just switch to Linux immediately.

        > - Speed of compilation at some cost: the compiler does almost no
        > optimizations, not even the uncontroversial ones. Code generated is
        > large and slow. Expect the number of supported architectures to be
        > poor, not because it can't technically generate code for a particular
        > target, but because the timings and size of the kernel would preclude
        > it from running on anything useful.

        Yes, I'm sure you've done all the tests to support those claims and you are a master computer engineer since you obviously know that compilation time is irrelevant.

        > - Poor multiarchitecture support (unless you're limiting yourself to
        > 1970s systems and ix86.) This will need to be added before it can be
        > considered credible.

        No shit?

        > Bizarre.

        I find your comment insightful, mr. armchair hacker.

      4. By Todd T. Fries (todd) todd@fries.net on http://todd.fries.net/

        > > What's the goal with importing pcc? Switching from gcc to compile everything in OpenBSD? OpenBSD is already slow compared to other free Unix-like systems, so why slow it down further by switching to a non-standard compiler that has none of the optimizations gcc has had for 10 years?
        > >
        >
        > I suspect for the most part it's the usual license zealotry that seems to have reached a peak (or is that a nadir) over the last few months.

        Hey, guess what. License zealotry has nothing to do with the legal and moral fact that BSD licenced code cannot have its BSD license removed and the GPL attached in its place. Reyk's work was never dual licensed. You're free to choose GPL for your code, I'm free to choose BSD for mine. But I can't change your license for you, nor can you change my license for me. Simple as that!

    6. By Bob Beck (129.128.11.43) beck@openbsd.org on

      > Why, but why?
      >
      > pcc, tendra, tcc, etc. are funny projects, but they are way, way, way behind gcc when it comes to optimizations. gcc has received tons of substantial contributions from large companies, and I don't see how alternative free compilers can have any chance to compete with gcc nowadays. It's way too late.
      >
      > And gcc is not bug-free, but it has a huge community of users. I wouldn't trust a compiler that no one uses like pcc for any serious work.
      >
      > What's the goal with importing pcc? Switching from gcc to compile everything in OpenBSD? OpenBSD is already slow compared to other free Unix-like systems, so why slow it down further by switching to a non-standard compiler that has none of the optimizations gcc has had for 10 years?
      >
      >
      >

      Why but why?

      Linux, OpenBSD, FreeBSD, etc. are funny projects, but they are way, way, way behind Windows when it comes to Business Acceptance. Windows has received tons of substantial contributions from large companies, and I don't see how alternative free operating systems can have any chance to compete with Windows nowadays. It's way too late.

      And Windows is not bug-free but it has a huge community of users. I wouldn't trust an Operating system that no one uses like OpenBSD for any serious work.


      . . . .


      Baaaaaaaa!!! Please return to the flock from whence you came!


      1. By Anonymous Coward (71.242.103.24) on

        Yikes! for a minute there I thought you meant all that crap.... my sarcasm detector experienced a BSOD....

  9. By Anonymous Coward (80.229.163.35) on

    Although I respect every attempt to replace gcc I think there is only one serious project that will be able to do it (esp. now when many companies, Apple included, are behind it):

    http://llvm.org/

    Compared to this compiler technology everything else (esp. gcc) looks like a joke.

    1. By Anonymous Coward (68.100.130.1) on

      > Although I respect every attempt to replace gcc I think there is only one serious project that will be able to do it (esp. now when many companies, Apple included, are behind it):
      >
      > http://llvm.org/

      Yeah, too bad it's written in C++ :(

      But maybe if their C/C++/ObjC frontend (http://clang.llvm.org/) gets mature enough so it can compile itself, that could be the answer.

      Anyway, it may do more than pcc, but it is also much larger:

      $ du -sh llvm-2.0 pcc-0.9.8
      26.5M llvm-2.0
      1.4M pcc-0.9.8

      and that's without the clang frontend, which isn't even in a usable state yet.

      Still could end up smaller than gcc, and is probably more cleanly written. I don't know about the OBSD devs, but I'll certainly be keeping an eye on this project.

    2. By Todd T. Fries (todd) todd@fries.net on http://todd.fries.net/

      > Although I respect every attempt to replace gcc I think there is only one serious project that will be able to do it (esp. now when many companies, Apple included, are behind it):
      >
      > http://llvm.org/
      >
      > Compared to this compiler technology everything else (esp. gcc) looks like a joke.

      Nice technology, wrong license, move along now.

      1. By Anonymous Coward (88.153.148.5) on

        > > Although I respect every attempt to replace gcc I think there is only one serious project that will be able to do it (esp. now when many companies, Apple included, are behind it):
        > >
        > > http://llvm.org/
        > >
        > > Compared to this compiler technology everything else (esp. gcc) looks like a joke.
        >
        > Nice technology, wrong license, move along now.

        What's wrong with LLVM's license? It looks like a BSD License:
        http://llvm.org/releases/2.0/LICENSE.TXT

        1. By gwyllion (193.190.253.149) on

          > > Nice technology, wrong license, move along now.
          >
          > What's wrong with LLVM's license? It looks like a BSD License:
          > http://llvm.org/releases/2.0/LICENSE.TXT

          The NCSA license is a combination of the MIT/X11 and BSD licenses. See http://en.wikipedia.org/wiki/University_of_Illinois/NCSA_Open_Source_License

  10. By Brynet (Brynet) on

    I welcome this addition, A native BSD licenced C compiler is exactly what the BSD's need.

    Plus, Having it in the CVS tree will allow talented programmers to work on it.. adding features, optimizations and security audits that are common proactive for OpenBSD coders.

    I know of a few other BSD licenced compilers though.. just for kicks..
    http://nwcc.sourceforge.net/ - Several processor architectures..
    and
    http://tack.sourceforge.net/ - Also supports many architectures - The original Minix C compiler.

    As for a BSD licenced assembler that also accepts AT&T Syntax and Intel Syntax..
    http://www.tortall.net/projects/yasm/

    I'll surely submit patches, if I find any problems that is..

    1. By Brynet (Brynet) on

      > I welcome this addition, A native BSD licenced C compiler is exactly what the BSD's need.
      >
      > Plus, Having it in the CVS tree will allow talented programmers to work on it.. adding features, optimizations and security audits that are common proactive for OpenBSD coders.
      >
      > I know of a few other BSD licenced compilers though.. just for kicks..
      > http://nwcc.sourceforge.net/ - Several processor architectures..
      > and
      > http://tack.sourceforge.net/ - Also supports many architectures - The original Minix C compiler.
      >
      > As for a BSD licenced assembler that also accepts AT&T Syntax and Intel Syntax..
      > http://www.tortall.net/projects/yasm/
      >
      > I'll surely submit patches, if I find any problems that is..


      ***** common *practise* for OpenBSD coders.

      I need a drink lol..

  11. By Anonymous Coward (216.68.198.57) on

    Great, scratch an itch...
    OpenBSD and others are not taking *anything* for granted, and we all benefit. Otherwise, we would only have Microsoft, or we'd still be throwing stones...
    Why wait for the legal world to rule on a tainted claim, as if using GPL GCC meant the rest is GPL-tainted and not BSD? Crazy, but why wait X years for that environment, when it takes many years to build infrastructure?
    Why be dependent on other stuff? Walmart got big by getting rid of all middlemen and tainted dependencies.
    All for GCC work, just happy that there are always options.

    1. By raw (84.135.111.155) on

      Sorry?

  12. By zyz (88.91.96.122) zenwalk31@gmail.com on

    GCC needs more competition for sure. I just don't think that focusing on compilation speed for a C compiler is the right move. By the time this one is usable, GCC will have automatic code vectorization to better utilize all those x-core CPUs and maybe even to speed up the compilation itself. It takes years to create a good compiler - they should focus on building a very wide community of developers and testers, and that's a field where the Open/NetBSD guys typically don't do a good job.

    1. By Anonymous Coward (219.90.211.166) on

      $ du -s gcc-4.0.3 pcc-0.9.8
      169M    gcc-4.0.3
      1.8M    pcc-0.9.8
      
      * It already builds most of the userland.
      * There's much interest from all of the BSD camps.
      * Small code base means it's more accessible to potential compiler hackers.
      * Small code base means bugs are found and fixed more easily.
      * Pulling it into base garners interest from other users/devs (potential compiler hackers).

      How many people do you think are poring over the code right now? I know I am!

  13. By Anonymous Coward (213.118.238.47) on

    "... some were plain errors in the code that gcc ignores."
    excellent :-)

    1. By Anonymous Coward (2001:6f8:94d:4:2c0:9fff:fe1a:6a01) on

      > "... some were plain errors in the code that gcc ignores."
      > excellent :-)

      Too bad that NetBSD® guy didn't say _which_ ones they were.
      I for one am interested...

  14. By Bernd Schoeller (schoelle) bernd@fams.de on

    This is great news. GCC is a moving target, and they do not care.

    Let's give a small example from our own experience: We had major problems with GCC 4.x, because they just switched the order of evaluating "target = source" (3.x: first the source expression is evaluated, then the target expression; 4.x: the other way around). When we complained, the answer was just: "It is not specified in the C standard, so we can switch it whenever we want!". Do you want to build your tool chain on such a reliable compiler?

    I am looking forward to the integration of 'pcc' into standard OpenBSD, kicking GCC into the ports system, where it belongs. Keep up the good work!

    1. By Marc Espie (213.41.185.88) espie@openbsd.org on

      > This is great news. GCC is a moving target, and they do not care.
      >
      > Let's give a small example from our own experience: We had major problems with GCC 4.x, because they just switched the order of evaluating "target = source" (3.x: first the source expression is evaluated, then the target expression; 4.x: the other way around). When we complained, the answer was just: "It is not specified in the C standard, so we can switch it whenever we want!". Do you want to build your tool chain on such a reliable compiler?

      Now, you're an idiot, and the GCC people are right.

      The issues we have with GCC semantics moving around are actual issues.
      Bugs in the aliases analysis, for instance. Or packing changes. Or changes in inline assembly.

      But relying on evaluation order where it's not defined ?

      Please, do us a favor and go back to coding in Java.

      1. By Bernd Schoeller (schoelle) on

        > > This is great news. GCC is a moving target, and they do not care.
        > >
        > > Let's give a small example from our own experience: We had major problems with GCC 4.x, because they just switched the order of evaluating "target = source" (3.x: first the source expression is evaluated, then the target expression; 4.x: the other way around). When we complained, the answer was just: "It is not specified in the C standard, so we can switch it whenever we want!". Do you want to build your tool chain on such a reliable compiler?
        >
        > Now, you're an idiot, and the GCC people are right.

        Darn, Marc, why does the OpenBSD community always have to be so harsh, attacking people instead of arguments ... the second half of that sentence would have been sufficient.

        Did I say the GCC people were wrong? The C standards leave a lot of slack, everybody knows this. All I said was that GCC is a moving target. Changing a very basic evaluation order without warning was an issue. Did you read my post?

        > But relying on evaluation order where it's not defined ?

        Relying on evaluation order in manually written code is bad code. But when you generate code using a moving garbage collector, things get tricky, and one needs to adapt to specific properties of the compiler outside of the C standard. If these properties change every couple of months, it is an issue, at least for us.

        1. By Marc Espie (213.41.185.88) espie@openbsd.org on

          > > > This is great news. GCC is a moving target, and they do not care.
          > > >
          > > > Let's give a small example from our own experience: We had major problems with GCC 4.x, because they just switched the order of evaluating "target = source" (3.x: first the source expression is evaluated, then the target expression; 4.x: the other way around). When we complained, the answer was just: "It is not specified in the C standard, so we can switch it whenever we want!". Do you want to build your tool chain on such a reliable compiler?
          > >
          > > Now, you're an idiot, and the GCC people are right.
          >
          > Darn, Marc, why does the OpenBSD community always have to be so harsh, attacking people instead of arguments ... the second half of that sentence would have been sufficient.

          > Did I say the GCC people were wrong? The C standards leave a lot of slack, everybody knows this. All I said that GCC is a moving target. Changing a very basic evaluation order without warning was an issue. Did you read my post?

          I'll bet the GCC people did not even realize they were changing the order. Heck, the evaluation order is not even arch-independent in GCC. Complaining about this makes no sense.

          > > But relying on evaluation order where it's not defined ?
          >
          > Relying on evaluation order in manually written code is bad code. But when you generate code using a moving garbage collector, things get tricky, and one needs to adapt to specific properties of the compiler outside of the C standard. If these properties change every couple of month, it is an issue, at least for us.

          You realize, of course, how many bad engineering decisions there are in that sentence? You are relying on a *hack*. This is the kind of mistake I outgrew back when I was a teenager. You really, really want to avoid undefined behavior like the plague, or you will have to stay locked in to a specific set of tools forever.

          I have absolutely no sympathy for this kind of lack of foresight. I've spent enough time trying to work around such idiotic decisions in various ports and other software. Sometimes, the easy path is not good enough. You really should think things through.

          1. By Bernd Schoeller (schoelle) on

            > I have absolutely no sympathy for this kind of lack of foresight. I've spent enough time trying to work around such idiotic decisions in various ports and other software. Sometimes, the easy path is not good enough. You really should think things through.

            Fair enough, and a valid argument. We had targeted many C compilers, and all behaved the same way. Then GCC switched. We definitely tripped over the problem of not checking whether the standard said anything.

            We now have to replace 'X = Y;' by 'tmp = Y; X = tmp;' when we generate code (X computes the offset of a field of an object, and Y might trigger the GC, moving the object away), which definitely enforces the evaluation order and is hopefully optimized away. It is somewhat ugly, though ...

            1. By Anonymous Coward (91.0.235.145) on

              > > I have absolutely no sympathy for this kind of lack of foresight. I've spent enough time trying to work around such idiotic decisions in various ports and other software. Sometimes, the easy path is not good enough. You really should think things through.
              >
              > Fair enough and valid argument. We had targeted many CC compilers, all behaved the same way. Then GCC switched. We definitely tripped over the problem of not looking up if the standard said anything.
              >
              > We now have to replace 'X = Y;' by 'tmp = Y; X = tmp;' when we generate code (X computes the offset of a field of an object, and Y might trigger the GC, moving the object away), which definitely enforces the evaluation order and is hopefully optimized away. It is somewhat ugly, though ...


              Having to still cope with this sort of shit in 2007 is the real problem of the world.

              Tools teach people.
              I'd like the other way around.

              1. By ethana2 (68.96.129.230) ethana2@gmail.com on

                Well, I'm mostly a GNU and CC-BY-SA person, but I'll be sure to give the BSD license a good read before I go putting out code.

                Again, we do want to be sure that we don't get too divided by licenses. If all works were under attributed public domain, I would be quite content without all of this legal complexity.

                I think the next ten years will work a lot out of this mess, you know, once people stop buying software...

                1. By Anonymous Coward (67.64.89.177) on

                  > Well, I'm mostly a GNU and CC-BY-SA person, but I'll be sure to give the BSD license a good read before I go putting out code.

                  You'll be bored quickly. It's only one sentence:
                  * Permission to use, copy, modify, and distribute this software for any
                  * purpose with or without fee is hereby granted, provided that the above
                  * copyright notice and this permission notice appear in all copies.

            2. By David Jones (82.152.227.241) drj@pobox.com on http://drj11.wordpress.com/

              > > I have absolutely no sympathy for this kind of lack of foresight. I've spent enough time trying to work around such idiotic decisions in various ports and other software. Sometimes, the easy path is not good enough. You really should think things through.
              >
              > Fair enough and valid argument. We had targeted many CC compilers, all behaved the same way. Then GCC switched. We definitely tripped over the problem of not looking up if the standard said anything.
              >
              > We now have to replace 'X = Y;' by 'tmp = X; Y = tmp;' when we generate code (X computes the offset of a field of an object, and Y might trigger the GC, moving the object away), which definitely enforces the evaluation order and is hopefully optimized away. It is somewhat ugly, though ...

              It sounds like you haven't read Boehm's "Simple garbage-collector-safety", http://portal.acm.org/citation.cfm?id=231394&coll=portal&dl=ACM&CFID=35600066&CFTOKEN=65714172 . This sort of stuff is standard operating procedure for anyone targeting C in a GCed environment.

              David Jones

              1. By Bernd Schoeller (schoelle) on

                This is getting off-topic, so I will stay brief (and try to remain polite):

                Boehm does not describe a moving GC, so he does not need to cope with objects being moved around, which was the cause of our problems (it is remotely related, but not the same as his KEEP_LIVE macro, as you probably know).

                The core of the discussion is that the C standards provide much slack and a compiler moving freely within the bounds of the standard between its releases without notice creates problems. So, my original posting welcomed the idea of having a compiler that is more conservatively developed than GCC. No more, no less.

                This has indeed nothing to do with the "real bugs" that Marc mentioned. Also, Marc remarked correctly that had we started by relying only on the C standard definitions, we would not have made a wrong assumption about the evaluation order in the first place. But it is sometimes difficult to differentiate between "all compilers do it that way by accident" and "all compilers do it that way because the C standard says so somewhere" (at least upfront, and for mere mortals).

                1. By David Jones (82.152.227.241) drj@pobox.com on http://drj11.wordpress.com/

                  > The core of the discussion is that the C standards provide much slack and a compiler moving freely within the bounds of the standard between its releases without notice creates problems. So, my original posting welcomed the idea of having a compiler that is more conservatively developed than GCC. No more, no less.

                  Using C as a back-end is a dangerous business, as you are finding out. Perhaps you should be using C--; it's deliberately designed to be targeted by compilers.

    2. By Anonymous Coward (85.178.82.0) on

      > This is great news. GCC is a moving target, and they do not care.
      >
      > Let's give a small example from our own experience: We had major problems with GCC 4.x, because they just switched the order of evaluating "target = source" (3.x: first the source expression is evaluated, then the target expression; 4.x: the other way around). When we complained, the answer was just: "It is not specified in the C standard, so we can switch it whenever we want!". Do you want to build your tool chain on such a reliable compiler?
      >
      > I am looking forward to the integration of 'pcc' into standard OpenBSD, kicking GCC into the ports system, where it belongs. Keep up the good work!

      I hope it's not just a small fire flaming up... I hope Theo has the nuts needed to push this to a RESULT.
      I mean, I never understood why he didn't make ANY steps in this direction even years ago...

      As you pointed out: GCC is a bit... well... casino-like...

    3. By Anonymous Coward (82.212.49.168) on

      > Let's give a small example from our own experience: We had major
      > problems with GCC 4.x, because they just switched the order of
      > evaluating "target = source" (3.x: first the source expression is
      > evaluated, then the target expression; 4.x: the other way around).
      > When we complained, the answer was just: "It is not specified in the
      > C standard, so we can switch it whenever we want!". Do you want to
      > build your tool chain on such a reliable compiler?

      You really are - as noted in the sibling post - an idiot.

      Not only does the lvalue in your expression require computation, it depends on the rvalue?? How much more idiotic - and suicidal - can it be????

      Even without looking into the standard, I can tell you that the code is wrong, the code is trash, and its author should be shot dead just to take him out of the gene pool.

      P.S. And this should be your reading for tonight - http://www.faqs.org/docs/artu/ch01s07.html

      -- pissed-me-who-had-to-debug-such-crap-code-for-living

  15. By George (166.70.196.201) on

    I think it's a worthy goal. I've personally reviewed the gcc 4.x sources, because I wrote a precompiler for use in some projects. My precompiler uses lex and yacc to generate C code. The language the precompiler accepts is similar to C, but it has single and double linked-list keywords, and some other things. A good grammar doesn't accept some of the things that gcc does, partly because gcc doesn't use much of a yacc grammar, so the verification is not ideal.

    With gcc I've found it would accept IIRC: enum {foo, bar, baz, }; Note the extra trailing ",". The way they implement the extended asm syntax parsing in gcc is also very ugly. I used yacc to do the heavy lifting for the extended asm. The __attribute__ syntax can also be tricky, and invasive.

    For example:
    function_definition
    : declaration_specifiers gnu_attribute declarator declaration_list compound_statement
    | declaration_specifiers gnu_attribute declarator compound_statement
    | declaration_specifiers declarator compound_statement gnu_attribute ';'
    | declaration_specifiers declarator ASM '(' asm_expression ')' gnu_attribute_list ';'

    There are more, and basically it was frustrating to add attribute support. In any case, I hope the BSD developers have a better time than I did implementing it.

    1. By Anonymous Coward (78.147.106.71) on

      > With gcc I've found it would accept IIRC: enum {foo, bar, baz, }; Note the extra trailing ",". The way they implement the extended asm syntax parsing in gcc is also very ugly. I used yacc to do the heavy lifting for the extended asm. The __attribute__ syntax can also be tricky, and invasive.

      There are, in general, good reasons to put a trailing comma on long enums and similar. It means patches that add extra items change fewer lines and so are less likely to cause a conflict. I'm not sure whether or not the C standard allows it, though other languages do.

  16. By Anonymous Coward (82.212.49.168) on

    Please, somebody tell Gentoo folks about that thing.

    "5-10 times faster" would make bunch of people on other side of licensing fence happy too.

  17. By grey (208.80.184.30) on http://www.advogato.org/proj/Kencc/

    So, what about kencc?

    uriel did quite a bit of work to get this relicensed under the MIT license and released it a couple of years ago. I believe he posted the announcement to an OpenBSD list (part of the incentive was that he had heard Theo was fond of kencc's brevity).

    Anyway, it never hurts to have more compilers (we used to post stories about TenDRA back in the day too), but it seems like kencc might be good to check out as well.


    1. By Anonymous Coward (74.13.45.175) on

      > So, what about kencc?

      I have yet to see the source to kencc under a licence other than the Lucent one.

      1. By iru (201.19.44.205) on

        > > So, what about kencc?
        >
        > I have yet to see the source to kencc under a licence other than the Lucent one.

        i don't know if you can read, but the advogato page says
        LICENSE: MIT

        or if you want to read it all http://gsoc.cat-v.org/hg/kenc/file/f7b378582848/LICENSE

        i ask the same question:
        why not kencc? it's up and running on openbsd.

        1. By Anonymous Coward (74.13.45.175) on

          Source code with licence, not random person with no source naming a licence.

          1. By iru (201.19.44.205) on

            > Source code with licence, not random person with no source naming a licence.

            if you can't find your way through this web interface, i don't think you would understand the source code.
            as i am a patient man, here's what you were unable to find yourself: http://gsoc.cat-v.org/hg/kenc/file/f7b378582848

            1. By Anonymous Coward (2001:6f8:94d:5::2) on

              As far as I know, the version of kencc that comes under the MIT
              licence is the one in Inferno, not the one in Plan 9. The former
              lacks ELF or a.out output support, and I couldn't get any ECOFF
              executable to run at all.

              You might be interested in this one though:
              http://cvs.mirbsd.de/ports/plan9/kencc/

              1. By iru (146.164.37.217) on

                > As far as I know, the version of kencc that comes under the MIT
                > licence is the one in Inferno, not the one in Plan 9. The former
                > lacks ELF or a.out output support, and I couldn't get any ECOFF
                > executable to run at all.
                >
                > You might be interested in this one though:
                > http://cvs.mirbsd.de/ports/plan9/kencc/

                judging by the page you mentioned, this seems to be just the kencc tree packaged for installation as a MirPort. there doesn't seem to be any real porting work on it.

                the one I mentioned is still being developed. it started as a project for Google Summer of Code 2007, and it supports ELF and so on - as explained in http://gsoc.cat-v.org/projects/kencc/.
                crap, can't you people read?

  18. By Anonymous Coward (165.21.154.14) on

    It seems many do not see the benefit of a BSD licensed compiler. Here and on Slashdot, many think it's a waste of time to develop another compiler. Some see it as a good thing because it is faster. Faster is not the point; it can even be slow and inefficient at the start. PCC's advantage over GCC is the license. Why?

    Day by day, CPUs are getting faster and, most importantly, becoming multi-core. Recently Tilera (www.tilera.com) released a processor with 64 cores, and they are working on a 128-core processor. Intel is also working on similar multi-core processors. These multi-core processors are important for video encoding, simulation, gaming, etc. Techniques for efficient code generation for such chips may be known only to the manufacturer. It may not even be possible to release specs of the internal architecture in such a competitive world.

    Because PCC is BSD licensed, Tilera, Intel, AMD and other processor manufacturers can develop code generation modules as closed source and release them for PCC. What OEMs require is to sell their chips without exposing their internal workings; it's really up to them whether to release as closed source or open source. That way BSD users can benefit from the tremendous computing power these processors provide. PCC can even provide a compelling reason for people to switch from Linux to the BSDs. Can these OEMs release such a closed-source module/driver for GCC? No. Why? GCC is GPL-licensed, which requires them to offer the source code. That is why OEMs release such modules/drivers for Microsoft Windows. Therefore, a BSD licensed compiler is very good news. What the PCC developers should do is design it in such a way that anybody can extend its functionality by way of external plug-in modules. What competent people out there should do is help PCC.

    Sagara

    1. By art (213.56.159.23) on

      > Faster is not the point.

      Really? I guess we were wrong all the time in all the fights to try to convince gcc people to not make the compiler slower in every release they make.

      Time wasted on waiting for compilations to finish is time you don't write or debug code. The one person doing the most to improve OpenBSD in the past few years has been espie because he focused on getting make and other tools faster so that we'd spend less time twiddling our thumbs and more time hacking.

      1. By Anonymous Coward (165.21.154.13) on

        > > Faster is not the point.
        >
        > Really? I guess we were wrong all the time in all the fights to try to convince gcc people to not make the compiler slower in every release they make.
        >
        > Time wasted on waiting for compilations to finish is time you don't write or debug code. The one person doing the most to improve OpenBSD in the past few years has been espie because he focused on getting make and other tools faster so that we'd spend less time twiddling our thumbs and more time hacking.

        If you want to compile faster, compile on a Tilera :)

        1. By Anonymous Coward (70.143.100.114) on

          > If you want to compile faster, compile on a Tilera :)

          When can the developers expect your Tilera donation to arrive?

    2. By Anonymous Coward (151.188.18.58) on

      > It seems many do not see the benefit of a BSD licensed compiler. Here and on Slashdot, many think it's a waste of time to develop another compiler. Some see it as a good thing because it is faster. Faster is not the point; it can even be slow and inefficient at the start. PCC's advantage over GCC is the license. Why?
      >

      I disagree. I will explain why, point by point.

      > Day by day, CPUs are getting faster and, most importantly, becoming multi-core. Recently Tilera (www.tilera.com) released a processor with 64 cores, and they are working on a 128-core processor. Intel is also working on similar multi-core processors. These multi-core processors are important for video encoding, simulation, gaming, etc. Techniques for efficient code generation for such chips may be known only to the manufacturer. It may not even be possible to release specs of the internal architecture in such a competitive world.
      >

      Bullshit. LSI Logic's RAID cards are excellent. They release their specs, which is why OpenBSD supports them so well. Ralink and Realtek wireless cards are also pretty good; the specs are open, without stupid NDAs. I'm sure that Intel would *looooove* to somehow be able to keep its processor architecture closed, but they know they won't sell their processors if they don't publish it. Same for AMD, especially with the competition that Intel provides. Also, AMD is *FINALLY* publishing the specs for the ATI GPUs without an NDA. So yes, it is possible in "such a competitive world."

      > Because PCC is BSD licensed, Tilera, Intel, AMD and other processor manufacturers can develop code generation modules as closed source and release them for PCC. What OEMs require is to sell their chips without exposing their internal workings; it's really up to them whether to release as closed source or open source. That way BSD users can benefit from the tremendous computing power these processors provide.
      >

      Then let those companies write their own friggin' compilers and not freeload off of the free software community's work! Yes, that's right, let them write their own. Nobody's holding a gun to their heads to make them use free compilers. If Microsoft can write their own compiler, then so can Broadcom, Marvell, nVidia, and the rest.

      Having binary blobs in the operating system is emphatically *NOT* a benefit. That includes a closed compiler; it's easy to add backdoors to every compiled binary, as Ken Thompson pointed out years ago in "Reflections on Trusting Trust."

      http://home.worldcom.ch/pgalley/infosec/sts_en/crime.html


      > PCC can even provide a compelling reason for people to switch from Linux to the BSDs. Can these OEMs release such a closed-source module/driver for GCC? No. Why? GCC is GPL-licensed, which requires them to offer the source code. That is why OEMs release such modules/drivers for Microsoft Windows.
      >

      At least partially wrong. The OEMs release modules/drivers for MS Windows because MS strongarms them with threats if they don't, and bribes^W provides "co-marketing dollars" to them if they do. And having binary blobs in my OS is not my idea of a secure, reliable platform. Have you not read about the wireless driver issues that Theo and the others have been talking about for years now? Have you not read about the 3ware problem (I use LSI RAID for exactly this reason)? You actually want to further BINARY BLOBS in our operating systems?

  19. By gwyllion (193.190.253.149) on

    What is next? Transforming pmdb into a gdb replacement? I see art has some plans for pmdb on his todo list ;)

    1. By Anonymous Coward (74.13.45.175) on

      > What is next? Transforming
      >
      > I see art has some plans for pmdb on his todo list ;)

      Well, basically, yeah. OpenBSD is pretty much removing GPL bits, bit by bit, so you notice OpenCVS, sendbug, tar, blah, blah, blah... All sorts of GPL stuff replaced. Perhaps not pmdb, but since it's a debugger that's already started, it may well prove to be the basis of the pdb.

      1. By Anonymous Coward (85.178.91.130) on

        > > What is next? Transforming
        > >
        > > I see art has some plans for pmdb on his todo list ;)
        >
        > Well, basically yeah. OpenBSD is pretty much removing GPL bits, bit-by-bit, so you notice OpenCVS, sendbug, tar, blah, blah, blah... All sorts of GPL stuff replaced. Perhaps not pmdb, but since it's a debugger that is started already, it may well prove to be the basis of the pdb.

        I coredumped gdb by debugging a coredump....

        So how broken can a debugger be... and it's on x86, not some fancy arch nobody uses.

        1. By art (213.56.159.23) on

          > I coredumped gdb by debugging a coredump....

          pmdb does that quite often as well.

          It was never meant to be more than a helper for bootstrapping architectures.

  20. By Chris (82.109.154.146) on http://www.chrisjsmith.me.uk/

    This is good news for everyone. Although it's not kernel-ready, a small, fast, BSD-licensed compiler is what OpenBSD needs. It's a major step towards a completely single-license, politics-free bundle of software (an admirable goal in my mind).

  21. By Clay Dowling (12.37.120.99) clay@lazarusid.com on http://www.lazarusid.com

    What real world benefits will I see as a developer? I frankly don't care about all the license holy wars. What I want is a compiler that works out of the box and doesn't require that I jump through a lot of hoops to build my software. If switching to PCC means that I'm going to have trouble building third party software, I'm not going to be a fan of it and I'll keep building with gcc.

    Decreased build times are great and all, but I don't run "make world" every day. Let's not switch compilers just because we don't like the politics of another license. "Works everywhere" is a lot better for end users than "has politics we like".

    1. By Pierre-Yves Ritschard (pyr) on http://spootnik.org

      > What real world benefits will I see as a developer? I frankly don't care about all the license holy wars. What I want is a compiler that works out of the box and doesn't require that I jump through a lot of hoops to build my software. If switching to PCC means that I'm going to have trouble building third party software, I'm not going to be a fan of it and I'll keep building with gcc.
      >
      > Decreased build times are great and all, but I don't run "make world" every day. Let's not switch compilers just because we don't like the politics of another license. "Works everywhere" is a lot better for end users than "has politics we like".
      >
      >

      Now, I know there are a lot of questions, but you might want to go back and read them; all of your questions - which have been asked numerous times - are answered. You're not making any point here.

      1. By Pierre-Yves Ritschard (pyr) on http://spootnik.org


        > Now, I know there are a lot of questions, but you might want to go back and read them,

        s/questions/comments

      2. By Bob Beck (129.128.11.43) beck@openbsd.org on



        > Now, I know there are a lot of questions, but you might want to go back and read them, all of your questions - which have been asked numerous times - are answered, you're not making any point here.

        Come on pyr, it's web comments. Posters are supposed to ignore everything that was said before, it's like an unwritten rule of being part of the unwashed masses. Don't you read slashdot? ;)

      3. By Clay Dowling (12.37.120.99) clay@lazarusid.com on http://www.lazarusid.com


        > Now, I know there are a lot of questions, but you might want to go back and read them, all of your questions - which have been asked numerous times - are answered, you're not making any point here.

        This may shock you, but I did read the threads. And all I got was "it's faster" and a lot of noise about licensing holy wars. As I previously mentioned, faster doesn't affect me directly. Marc explained nicely how faster will affect me indirectly, and that is something that was missing from earlier discussions. As for the licensing issue, I'm just not charging the barricades over the licensing terms of software. It's not my issue. Good code (or at least as good as I can produce, which is a different thing) and reliable tools - those are the things that I care about.

        And don't waste your time telling me why I should care about your favorite license. Might as well discuss vocal technique with a pig.

        1. By Anonymous Coward (128.2.116.53) on

          > This may shock you, but I did read the threads. And all I got was "it's
          > faster" and a lot of noise about licensing holy wars.

          The real benefit is that it's simpler and easier to maintain, so it's going to be more bug-free and more hassle-free than gcc.

    2. By Marc Espie (163.5.254.20) espie@openbsd.org on

      > What real world benefits will I see as a developer? I frankly don't care about all the license holy wars. What I want is a compiler that works out of the box and doesn't require that I jump through a lot of hoops to build my software. If switching to PCC means that I'm going to have trouble building third party software, I'm not going to be a fan of it and I'll keep building with gcc.
      >
      > Decreased build times are great and all, but I don't run "make world" every day. Let's not switch compilers just because we don't like the politics of another license. "Works everywhere" is a lot better for end users than "has politics we like".
      >
      >

      Maybe faster turn-around on snapshots, and thus better reactivity? Maybe *you* don't build the world each day, but we do. And having a faster compiler may mean we get more time to try experiments...

      Or perhaps just better diagnostics? It might be simpler to coerce pcc into emitting useful warnings, or outputting various statistics and finer-grained information on the source files it's compiling.

      I don't know. If it proves easier to work with than GCC has been (and especially *less* encumbered with ideological issues), I'm all for it.

      Did you know that there's been some significant work floating around that allows GCC to dump its intermediate state and load it again? That patch was explicitly *rejected by the FSF* because it made it too easy to plug proprietary back-ends/front-ends into the compiler...

      1. By Clay Dowling (12.37.120.99) clay@lazarusid.com on http://www.lazarusid.com

        > Maybe faster turn-around on snapshots, and thus better reactivity? Maybe *you* don't build the world each day, but we do. And having a faster compiler may mean we get more time to try experiments...

        Thank you, Marc. That is a real-world benefit that I can appreciate. I definitely like the way new and useful features show up, and if cooler or more new features can start coming down the pipe, that's a benefit I can appreciate.

    3. By Anonymous Coward (146.164.37.217) on

      > What real world benefits will I see as a developer? I frankly don't care about all the license holy wars. What I want is a compiler that works out of the box and doesn't require that I jump through a lot of hoops to build my software. If switching to PCC means that I'm going to have trouble building third party software, I'm not going to be a fan of it and I'll keep building with gcc.
      >
      > Decreased build times are great and all, but I don't run "make world" every day. Let's not switch compilers just because we don't like the politics of another license. "Works everywhere" is a lot better for end users than "has politics we like".
      >
      >

      are you a developer or an end user?
      and tell your parents to stop telling you your needs are the main ones.

  22. By Anonymous Coward (194.109.21.4) on

    FreeBSD has also imported PCC into their ports tree: http://www.freshports.org/lang/pcc/ Too bad I run FreeBSD/amd64.

  23. By Anonymous Coward (80.120.1.196) on

    What about the TCC Compiler?
    http://fabrice.bellard.free.fr/tcc/

    I think this project is still in development; maybe it would be easier to help make TCC better instead of starting a new project.
    The license is LGPL, I think.
    But I don't know if TCC is better than PCC for compiling the BSDs (the author has probably tried to compile the Linux kernel, but no BSD).

    Sorry for my bad English, I am from Germany.

    1. By Anonymous Coward (129.128.11.43) on

      > What about the TCC Compiler?
      > http://fabrice.bellard.free.fr/tcc/
      >
      > I think this project is still in development; maybe it would be easier to help make TCC better instead of starting a new project.
      > The license is LGPL, I think.
      > But I don't know if TCC is better than PCC for compiling the BSDs (the author has probably tried to compile the Linux kernel, but no BSD).
      >
      > Sorry for my bad English, I am from Germany.

      LGPL license, so forget it. In ports, fine; not in base.


  24. By Motley Fool (MotleyFool) motleyfool@dieselrepower.org on

    And now, if you want to follow the pcc mailing list, the archive can be found here.

  25. By Anonymous Coward (83.138.136.90) on

    I'm kind of gutted; I was about to start a new BSD-licensed OpenBSD system compiler in a couple of weeks' time when I get back, and I was quite looking forward to it. However, if this project is in, I should really turn my attention to contributing.

    I guess the first goal is to provide basic support for all the different archs not currently supported. All the old tricks like SSP need to be added too, and Makefiles probably need to be adjusted. OK, I'm getting excited again.

    Anyone know of a BSD licensed debugger ?

    1. By dingo (192.85.50.2) af.dingo@gmail.com on http://1984.ws

      > Anyone know of a BSD licensed debugger ?

      pmdb
      
      "The pmdb debugger was written because the author believed that gdb(1) was too bloated and hairy to run on OpenBSD/sparc64."
      

      1. By Anonymous Coward (91.84.210.4) on

        > pmdb
        >
        > "The pmdb debugger was written because the author believed that gdb(1) was too bloated and hairy to run on OpenBSD/sparc64."
        >

        Cool, cheers, I'll check it out.

  26. By jay (114.143.121.210) rockworldmi@gmail.com on

    Any news on this article?

Credits

Copyright © - Daniel Hartmeier. All rights reserved. Articles and comments are copyright their respective authors, submission implies license to publish on this web site. Contents of the archive prior to as well as images and HTML templates were copied from the fabulous original deadly.org with Jose's and Jim's kind permission. This journal runs as CGI with httpd(8) on OpenBSD, the source code is BSD licensed. undeadly \Un*dead"ly\, a. Not subject to death; immortal. [Obs.]