OpenBSD Journal

OpenBSD Gets Harder to Crack

Contributed by jose on from the favorably-reviews dept.

Grégoire was one of a few to write: "I'm surely neither the first nor the last to report that eWEEK has an article on OpenBSD 3.3 titled 'OpenBSD Gets Harder to Crack'.

"eWEEK Labs has used past versions of OpenBSD for a number of years in our lab for network firewalls as well as in OpenHack security tests and have come to trust the product's rock-solid reliability and secure-out-of-the-box configuration. It's free to download or $40 for a CD version."" It's a short review, but pretty spot on. This kind of exposure helps improve the visibility of the project, bringing in new talent and sales.

(Comments are closed)

  1. By fondula di carceri () on

    from the article...

    " The OpenBSD project has made a decision against trusted-operating-system-style mandatory access controls that place kernel-enforced limits on what particular processes or users can do. "People who use such things build systems which cannot be administered later," said Theo de Raadt, OpenBSD project leader, in Calgary, Alberta. "I am holding the fort against such complexity." "

    Ouch... I'd really like to see some access control mechanisms; we only have systrace, and it works fine, but RSBAC would be nice (or the patches from TrustedBSD)...

    To any capable coder: I'm willing to donate a large gift basket of various excellent Belgian beers to whoever ports RSBAC to OpenBSD (if possible) :)

    1. By Noryungi () n o r y u n g i @ y a h o o . c o m on

      What is, I think, even worse is that eWEEK points out that OpenBSD is a pain in the neck to upgrade for sysadmins.

      Does MAC+tough upgrades = a less interesting OS? Or a more difficult one to administer?

      Please note that this is not a troll, just an honest question: I have limited experience with OpenBSD, and I am trying to understand the pros and cons before moving several [important] servers from Slackware Linux to OpenBSD...

      1. By Michael Anuzis () on

        I'm no OpenBSD guru, but I'd say OpenBSD isn't difficult in the least to upgrade in 90% of situations. Some people may call me a hypocrite because it was just last month I was asking for help updating OpenBSD 2.6 to 3.3, but that's a special exception.

        From my experience OpenBSD is usually very easy to upgrade. The only recent exception would be with the switch from a.out to ELF, and even that wasn't that bad.

        OpenBSD provides a page that lists all the major changes between releases that you should be aware of when upgrading, and tells you exactly what to do about each:

        Not that scary, right?

      2. By djm () on

        I have never found OpenBSD difficult to upgrade. In fact, it is about the easiest system to keep secure of them all - when a patch comes out: cvs update ; make build...

        As for 6-monthly upgrades, these are made very easy through CVS again. "cvs diff -r OPENBSD_X_Y etc" and merge the changes.
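        As a rough dry-run sketch of that six-monthly merge step (the release tags are examples, a checked-out /usr/src is assumed, and with DRY_RUN=1, the default, each command is printed rather than run):

```shell
#!/bin/sh
# Sketch of djm's "cvs diff -r OPENBSD_X_Y etc" step, using two release tags
# as an illustrative variant. Tags and paths are assumptions.
# DRY_RUN=1 (the default) prints each command instead of running it.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "+ $*"; else "$@"; fi; }

run cd /usr/src
# Diff the repository's etc files between the old and new release tags,
# then merge the resulting hunks into your live /etc by hand.
run sh -c "cvs -q diff -u -r OPENBSD_3_2 -r OPENBSD_3_3 etc > /tmp/etc-32-33.diff"
```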

        If you are used to Slackware, then OpenBSD will be a walk in the park. You don't have RSBAC there.

      3. By Petr R. () on

        I was once a Windows-only admin; after that I tried various Linuxes.
        I have been using OpenBSD exclusively for about two years now, and I have no problem running it for everyday use, even on my notebook (and on approximately 10 machines: RADIUS, Apache, DNS, FTP, Samba, and dozens of other services). Don't be scared off by that article; it is too small to cover all aspects.
        If you do not need a fancy GUI for administration, and as a Slackware admin you probably don't, there is no reason to be scared.

      4. By James () quel AT quelrod DOT net on

        I upgraded from 3.1 to 3.2 via a source tree build and then slowly merged in the new /etc conf files. It was fairly painless, except for the fact that the build machine was a PPro 200.
        For 3.2 to 3.3 I was replacing the old box anyway, so I set up a new install and swapped out the boxes.
        A friend of mine tried to go from 3.2 to 3.3, somehow hosed the system, and ended up doing a clean install.
        BTW, clean installs are recommended in the docs due to all the changes in setuid, confs, etc. But we are all meticulous sysadmins who never miss one line of a change, right ;)

      5. By krh () on

        I don't think OpenBSD is more difficult to administer. If anything it's easier. OpenBSD has a very unified feel to it--everything is put together in a sensible way, and everything interacts well with everything else. (It isn't always perfect, but OpenBSD tries very hard and does an excellent job)

        I don't know why the article complains about upgrades. OpenBSD is one of the few systems out there that I think you can safely not patch for six months. If anything, OpenBSD decreases the amount of patching and maintenance you have to do because you need to patch only every six months, not every two weeks when another bug comes out. I'm not saying that you should never patch more often than every six months (that would be bad), but even then I think you'd do less maintenance.

        I happen to think that the regular six-month release schedule improves stability--it forces all the really disruptive patches into the tree right after a new release is made, and then they have six months to stabilize. Compare this with Linux, where the time between stable releases is years, and development disrupts the tree so much that the first release of a new stable series (e.g., 2.2.0, 2.4.0) is often pretty bad, to the point where major vendors like Red Hat do not ship it.

        All that said, I wouldn't move your several important servers from Slackware to OpenBSD until you know how to use OpenBSD. If you have a spare machine, make sure that you can set it up to do the tasks that your Slackware servers are doing right now and make sure to write down what you did to get them to work before you move them to OpenBSD. If you don't have a spare, get one--your employer will not appreciate hours of downtime while you read the man pages.

        Good luck!

      6. By Henning () on

        updating/upgrading is easy.
        people whine because there is no ./ or the like. it is not really needed, either. a few changes to /etc by hand and you're done. or even easier, diff between the unpacked etc32.tgz and etc33.tgz and apply that diff - easy.
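        That diff-the-etc-sets trick might look roughly like this (tarball locations are assumptions; with DRY_RUN=1, the default, the script only prints what it would do):

```shell
#!/bin/sh
# Unpack both etc sets side by side, diff them, and merge the delta into /etc.
# Tarball paths under /tmp are assumptions; DRY_RUN=1 (default) only prints.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "+ $*"; else "$@"; fi; }

run mkdir -p /tmp/etc32 /tmp/etc33
run tar xzpf /tmp/etc32.tgz -C /tmp/etc32
run tar xzpf /tmp/etc33.tgz -C /tmp/etc33
# diff(1) exits non-zero when the trees differ; "|| true" keeps that from
# looking like a failure when run for real.
run sh -c "diff -ruN /tmp/etc32 /tmp/etc33 > /tmp/etc.diff || true"
# Review /tmp/etc.diff, then apply the relevant hunks to /etc by hand
# or with patch(1).
```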

        1. By Kenny Gryp () on

          Or you could use mergemaster, found in /usr/ports/sysutils/mergemaster.

          Old post @ deadly about mergemaster:

    2. By Anonymous Coward () on

      I would love to see MAC too. I think Theo is totally, totally wrong on this. MAC isn't about covering your ass in case someone breaks in - it's about being able to share or protect files on a multi-user system! Users should be able to protect files from root and vice versa, as well as being able to share files with other users knowing no one else can access them. MAC is about privacy and control over documents - it has nothing to do with network security. It really appalls me to think this is something NT has had for years and the UNIX world is still struggling to implement it.

      1. By coldie () on

        protecting files from root? you don't find that to be a very major security concern?

        and granted, controlled access and access control lists both have their ups and downs. controlled access is far simpler, and as i understand, less tedious and faster too.

        1. By coldie () on

          but another thing i just remembered, doesn't solaris use controlled access with the option to append acl permissions via setfacl? how difficult would it be to implement something such as this on obsd?

      2. By Timothy Dyck / eWEEK Labs () on

        I like MAC too :) (more for security protection from root-level programs that get cracked than anything else, though).

        However, Windows does not have MAC (though there are third-party packages like Network Associates' Entercept or Cisco's Stormwatch that will add this capability). If a process running as Administrator or LocalSystem (and there are unfortunately many of these on a default Windows Server install) gets cracked, your system is wide open. As a user, you can set ACLs to deny access to the administrator, but any process with administrator rights can take ownership (a privilege that cannot be revoked) and then modify the ACLs.


          1. By Matt Ostiguy () matt at ostiguy dot com on

          FWIW, in Windows Server 2003, Windows appears to have more granularity in these service accounts - on the test box behind me, there are 6 services running under the built-in "Network Service" acct: dhcp client, dns client, license logging service, performance logs, dist. transaction coord, and rpc locator. Everything else is running under the "local system" acct. I haven't done enough digging around yet to offer much else, but you are right, Windows does have some quirks - the ability to flush audit logs and the inability to really stop that, etc.

      3. By RC () on

        > it's about being able to share or protect files on a multi-user system!

        I can do that just fine with standard Unix permissions.

        > Users should be able to protect files from root

        I can't think of any system where users can prevent "root" from accessing any files.

        > MAC is about privacy and control over documents - it has nothing to do with network security.

        Well, MAC is about _slightly_ finer-grained control over permissions. Indeed it has nothing to do with network security.

        I dare say, about the only thing left to make OpenBSD more secure (that isn't happening) is to switch it over to a microkernel design, and I don't suspect that will be happening. (I think TCP/UDP port ACLs would be good, but systrace does the job nearly as well.)

        1. By Charles Hill () on

          >I can't think of any system where users can prevent "root" from accessing any files.

          Trusted systems, like Trusted Solaris and Trusted AIX. "Root" doesn't really exist. The functions of the root user are broken up into multiple administrative accounts each with their own sphere of responsibility. There is no one "god" account, like in traditional Unix.

          The "root" account that can change security tokens and MACs so reading someone else's files is possible is NOT the same "root" that has control of the logging system. Thus, he can't wipe his tracks and full audit logs are still available.

          MACs aren't about "network" security, they are about "multiuser" security. "Network" is only a small part of that.

          1. By RC () on

            > Trusted systems, like Trusted Solaris and Trusted AIX. "Root" doesn't really exist.

            Well then you can't really prevent root from getting access, can you? However, there is always a backup account, which must have access to everything by nature... That is enough of a God account as far as I'm concerned.

            BTW, I am well aware of this, no need to explain it to me.

            > and full audit logs are still available.

            That is practically the sole advantage of MACs, and you could accomplish the same thing on a normal system, e.g. logging all events to a remote machine, or logging to any linear device (such as a printer), etc.

            > MACs aren't about "network" security, they are about "multiuser" security.

            MACs aren't about security at all. Get over it.

      4. By mike () on

        you miss the point, it's not just about having features, it's about having features:

        - that are thoroughly documented
        - that actually work as per specs
        - that you can turn off if you do not need the functionality and/or they do not work as advertised
        - ...

        if you've ever played with securing windows boxes, you should know that none of the above is applicable to NT or any subsequent Microsoft product.

        besides, what's the point of having really nifty access controls on a system that out of the box can be compromised just by opening a browser or reading an email? none.

        features are cool, but they do have to be implemented within a consistent framework.

        I don't think the "UNIX world" needs more "warm fuzzy" security mechanisms.

  2. By abe () on

    Keeping an OpenBSD system up-to-date is also very demanding for system administrators. Configuration files in /etc need to be manually migrated during version upgrades (which ship every six months), and security patches are released only in source code form. A binary patch distribution tool would make it much easier to deploy OpenBSD systems in larger numbers.

    I strongly disagree with their "very demanding" claim .. but I do wonder, are binary patches not feasible? As someone mentioned in another post, compiling on an old p200 isn't the end of the world, but can be more time consuming than I would otherwise prefer ...

    1. By Matt () on

      I agree with you and think an officially supported binary patch system wouldn't be a bad thing for OpenBSD. Yes, I can upgrade via source and all that foo, but sometimes I admit I'm lazy and don't want to read pages of docs to upgrade a system.

      Note the key words 'officially supported'. I know there is a binpatch project out there that has been mentioned here on deadly a few times; look around for it.

      1. By Bdoserror () unsafe@any.speed on

        The other advantage to a binary patch is that it means you don't have to have the build tools on your production machine, which reduces the effectiveness of root kits. If they can't compile their own tools, they are easier to find.

        1. By Anonymous Coward () on

          Not necessarily. You can recompile the system on another machine and do binary patches yourself, if you wish. It's not that hard, and you win twice.

        2. By zil0g () on

          because as an eleet haqah i only bring .c files over to your box, i never actually compile them on any of my own machines before i r00t you.

          thank you and goodnight.

      2. By ts () on

        There is an official binary patch system already:

        cd /
        tar xzfp /tmp/base33.tgz (and the rest of them except etc33.tgz)
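        Taken one small step further, that "binary patch" is a loop over all the file sets, skipping etc33.tgz for a manual merge (set names and the /tmp location are assumptions; with DRY_RUN=1, the default, it only prints the plan):

```shell
#!/bin/sh
# Unpack every 3.3 file set over / except etc33.tgz, which is merged by hand.
# The set list and /tmp staging path are assumptions; DRY_RUN=1 only prints.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "+ $*"; else "$@"; fi; }

SETS="base33.tgz comp33.tgz man33.tgz misc33.tgz game33.tgz"
unpack_sets() {
    for set in $SETS; do
        run tar xzpf "/tmp/$set" -C /
    done
}
unpack_sets
```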

    2. By Patcher () on

      Binary patches are called rdist or rsync; they work best over the network.
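      For instance, mirroring a built release tree from a build box onto a set of firewalls might look like this (hostnames and the /usr/dest path are invented; with DRY_RUN=1, the default, the loop only echoes the commands):

```shell
#!/bin/sh
# Rsync a "make release" tree from the build box onto each production host.
# Hostnames and the /usr/dest path are made up; DRY_RUN=1 (default) prints.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "+ $*"; else "$@"; fi; }

push_all() {
    for host in fw1 fw2 fw3; do
        # --exclude=/etc: local config is merged separately, not overwritten
        run rsync -a --exclude=/etc -e ssh /usr/dest/ "root@$host:/"
    done
}
push_all
```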

    3. By Timothy Dyck / eWEEK Labs () on

      One comment on "very demanding" -- it's fine if you have one box, but eWEEK's print readers work in large corporate sites. Trying to take 30 or 60 OpenBSD boxes, say if you were running a Web farm, through an update cycle will force an organization to develop its own software distribution mechanisms to push out the updates. That's a big contrast to the software update agents and centralized management that other OSes offer. I also am not a fan of leaving a compiler on a production box as it's a security risk. An authenticated update distribution mechanism with signed updates is also a security risk, but a minimal one.

      Regardless of that, I'll pass on a good suggestion from Ron Rossen, who pointed out an application in the ports tree called MergeMaster.

      >The mergemaster script is designed to aid you in updating the various configuration and other files associated with OpenBSD. The script produces a temporary root environment using /usr/src/etc/Makefile, which builds the temporary directory structure from / down, then populates that environment with the various files. It then compares each file in that environment to its installed counterpart. When the script finds a change in the new file, or there is no installed version of the new file, it gives you four options to deal with it. You can install the new file as is, delete the new file, merge the old and new files (as appropriate) using sdiff(1), or leave the file in the temporary root environment to merge by hand later.

      I've done exactly this process by hand a few times now and this sounds like a good step forward.

      Tim Dyck
      eWEEK Labs

      1. By Anonymous Coward () on

        I'm sorry, but if you have that large of a cluster of machines, one would think you would have competent people around to look into:

        * known utilities that do this painlessly
        * whip up a script or distro mechanism to take care of this, e.g. expect

        Even doing this upgrade manually, if done smartly, would take less than 3 hours tops.

        I run 3 XP boxes. I have a more difficult time with Windows Update than I have ever had with my 6 OBSD toys. Funny how someone with a job in computing can complain about updating a cluster while others who know how sit without such a job. Baffling.

        Seems to me that the large corporate site with 30 to 60 servers does not knowingly exist to eWEEK. (To those confused: I'm saying that example was made up on the spot, in retrospect, to defend their statements; not something that was known ahead of time when the article was written. I'm not saying there aren't 60+ server farms out there running OBSD; I'm saying eWEEK probably doesn't have a case example they knew about previously.)

        1. By Timothy Dyck / eWEEK Labs () on

          Hi there,

          My point is that the manual upgrade process is a factor that system administrators need to add to the mix in their evaluation of whether OpenBSD is the right choice for them. In the review, I try to identify all the major pros and cons for our readership in trying out any new thing.

          I do feel that the process of keeping OpenBSD machines up-to-date is labor-intensive, and that the amount of effort involved in updating a system will grow with the number of systems involved. That's why I ended that paragraph with the statement, "A binary patch distribution tool would make it much easier to deploy OpenBSD systems in larger numbers." [because of the relatively larger effort of maintaining a group of machines compared to what other Unix operating systems offer]

          Now that certainly doesn't mean the process of batch copying updated files remotely to a group of machines, using a remote file copy or directory sync tool against a master machine combined with some remotely run scripts, isn't an option for distributing patches or doing OS upgrades. (I looked up the expect tool you mentioned, by the way; remote piping of keyboard input to a group of machines would be one way to do this.) Any large installation of OpenBSD would have the competent people you mention develop this infrastructure as a matter of necessity.

          What would be great would be to have that infrastructure shipping with OpenBSD so it wouldn't have to be developed and deployed on an ad hoc basis. I mentioned one tool that helps with the process in a previous post. If you or others with larger deployments have found particular techniques successful, that would be something very interesting and helpful to post.

          This all raises the question of whether this is just a theoretical issue; as in, do large deployments of OpenBSD exist in the field at all? I am unaware of any, and the 30-60 OpenBSD machine farm I suggested was an arbitrary example.

          I didn't ask about large deployments when interviewing Theo de Raadt for the review; my feeling is that this is not a high priority on the development list, and fair enough; that's a decision for OpenBSD's developers and suitably motivated user sites to make. I do think there are fewer than there would otherwise be given the limited support there is for this kind of deployment. If there is an example of this, please post! It would be a fascinating case study.

          However, on the topic of whether there are large corporate sites with 30 to 60 servers in deployment at eWEEK reader sites -- or many more -- that is most certainly the case, and has been for some time. This is not an artificial example or a straw man argument.

          We have a reader advisory board at eWEEK and talk to them every month, both publicly and privately, in a group conference call about their IT priorities. These are the people whose concerns I consider foremost when evaluating products, because they are selected from a deliberately broad cross-section of our readership, and the need to centrally support large numbers of systems in geographically distributed environments is definitely both a reality and an issue. Gannett (which publishes USA Today), for example, has a very sizable Web and Web application farm, and Nordstrom has a large distributed operation among all their stores. In a job previous to eWEEK, I worked in the networking group at a Northern Telecom production plant, and we had several hundred network devices and a few hundred servers supporting about 5000 onsite client workstations. The ability to centrally manage that networking gear (in roles that OpenBSD could have handled very well) was definitely an issue there as well.

          Having said all this is probably to overemphasize the point. OpenBSD has lots of great things going for it despite ongoing maintenance issues (such as how infrequently you are forced to patch it in the first place!) and there are definitely spots in the network where it makes a lot of sense to deploy it.

          Tim Dyck
          eWEEK Labs

          1. By jose () on

            hi tim

            great article, thanks for the reviews, and thanks for coming here and clarifying your perspective. i think you're right, it is a challenge we should look at overcoming. one method we could leverage to do that is the package system. i think it's flexible enough that the base directory could be set to /. in that case, simply build on your deployment box and roll a package. now push that out to your clients, pkg_add it, and voila, you're back in business. (provided everything else is the same.) a local package base could do it. i don't think more than one savvy bsd admin would be required in a large organization. this winds up not being a lot different from other unix patches or even win32 rollout and patch staging/testing.

            1. By djm () on

              You don't need to make packages, just do a "make release" on a build server and dump it out to all your machines:

              for host in `cat /xxx/machinelist` ; do cat base33.tgz | ssh $host "cd / ; tar zxpf - ; mv /bsd /obsd" ; scp bsd $host:/ ; done

              You could use rsync too, if you felt so inclined.

              1. By Anonymous Coward () on

                You don't need to make packages, just do a "make release" on a build server and dump it out to all your machines:

                That's a lot more to transfer and change on your servers, and among other things it means your inbox will be full of many very long messages from root listing all the binaries that have changed. Not that I don't use this method; it's just a hassle to go through the messages.

                One thing hinted at in a previous post is that no other OS gets updating perfect either. Microsoft patches open new vulnerabilities or slow the system down. Solaris patches change permissions on directories/files. Red Hat RPMs lead to dependency problems and unstable kernels. In each case, it is up to the sysadmin(s) to test before rollout. It's not like other patching systems are as easy as "i clicked the right checkbox when i installed so i'm set for life".

                Perhaps OpenBSD is better w/o official binary patches because it forces me to install binaries that i have verified work correctly since my build machine is running them.

                OpenBSD is a hands-on OS which, through its lack of robust update/patch tools, forces good sysadmin skills: knowing what is running on each system, keeping test systems and using them, having a good understanding of the OS, etc. Anyone who handles a lot of other systems knows they need these skills, but those other OSes try to present themselves as if you won't need them.

          2. By Anonymous Coward () on

            > If you or others with larger deployments have
            > found particular techniques successful, that
            > would be something very interesting and helpful
            > to post.

            There are some scripts to automate the upgrading process. You can even use them to create a release and install it on your compilerless machines.

            I can remember two, posted here on deadly:

          3. By Matt () on

            I think the most important point the author made in the parent post is that administrators shouldn't all have to create ad-hoc upgrade scripts for their sites; it should be included in the system, in _one_ tool. Mergemaster and portupgrade are great steps toward this goal, but there is certainly room for growth.

            Sometimes you just have to complete a task and don't have time to engineer your own homebrew script. I'm not asking for a pointy clicky upgrade tool, just something that can update/upgrade a base system.

            I'm thinking of a combination of "fastest cvsup", portupgrade, mergemaster, build generic system, build generic kernel, reboot -> tada new system. If people want to muck around with the finer details more power to them, but you shouldn't have to. How many people customize their system to the point where they honestly have to run all these steps by hand? Not many I imagine.

          4. By mike () on

            hi Tim,


            our mexican friends are probably already there, as is pointed out in the links mentioned above, which I am too lazy to check... I haven't tested it, though.

            from my admittedly limited experience, although I have worked in pretty large corporate settings, a 30-60 machine rollout is a complex task on any software platform, hardware and network connectivity issues notwithstanding, and the fact that tools exist does not necessarily an easy upgrade/patch/rollout make.

            if we're talking windows here, I would love to see 30-60 2k boxes patch themselves seamlessly without something dying along the way elsewhere than in a PR release.

            As an aside, an interesting read is Microsoft's own case study of moving Hotmail from FreeBSD to Win2k, if you're into such gory details...

            that said anything that makes life easier is good.



          5. By Anonymous Coward () on

            i have 6 openbsd servers in different locations and i upgrade all of them like this:

            lynx --dump|sh

   is a script that downloads tarballs and extracts them, and the tarballs are made at one build machine via make release

            very easy, very fast, binary.
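            A guess at what such a fetched script might contain: pull tarballs built with "make release" and unpack them over /. The URL, set list, and kernel handling below are all assumptions; with DRY_RUN=1 (the default) it only prints the plan:

```shell
#!/bin/sh
# Hypothetical sketch of a tarball-based upgrade script. The build-host URL,
# the set list, and the kernel step are invented; DRY_RUN=1 (default) prints.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "+ $*"; else "$@"; fi; }

BUILDHOST=http://build.example.org
upgrade() {
    for set in base33 comp33 man33 misc33; do
        run ftp -o "/tmp/$set.tgz" "$BUILDHOST/$set.tgz"
        run tar xzpf "/tmp/$set.tgz" -C /
    done
    run cp /bsd /obsd                    # keep the old kernel as a fallback
    run ftp -o /bsd "$BUILDHOST/bsd"     # install the new kernel
}
upgrade
```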

            1. By Anonymous Coward () on

              lynx --dump|sh

                sounds like a great opportunity for a MITM attack.

              1. By Anonymous Coward () on

                Please. Use ssh, which, while not a cure-all, cures a heck of a lot, including MITM. On a private network with defined address space, this would also be acceptable, although not necessarily as neat.

                I sure as hell wouldn't be caught using lynx either, but the original poster's point is well taken. It's easy.
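                One way to do the same fetch over ssh, as suggested (hostname and path are invented; with DRY_RUN=1, the default, the command is printed rather than run):

```shell
#!/bin/sh
# Pipe a file set from the build box over ssh instead of plain HTTP, which
# authenticates the peer and defeats MITM. Names are made up; DRY_RUN=1 prints.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "+ $*"; else "$@"; fi; }

run sh -c "ssh build.example.org 'cat /usr/rel/base33.tgz' | tar xzpf - -C /"
```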

          6. By Anonymous Coward () on

            As the person you responded to, I think you know the real reason and are simply bypassing it.

            Having lived for various yearly stints in the Chicago and DC areas, I think you need to widen your views.

            *I* think it is a strawman. I have a couple of college friends still in the Chicago area who sysadmin machines for trading companies next to the Chicago mercantile system. They don't work with OBSD on the actual trading machines, of course, nor on the backend servers, due to software, support, and hardware/software silver-service contracts with HP, etc., as you can imagine (IOW, they want a finger to point at if something goes awry), but they use OBSD machines as THEIR systems while working, esp. when they need security or need to push out software distributions.

            They seem to have no problem updating their machines en masse.

            Meanwhile and more recently in DC, I know people who assist with web publishing (not server administration) for USA Today and This is more article upload, not server administration of course. But it was regular commentary about whatever they were running biting the dust. Not OBSD, but given the environment, downed machines took away hours of work time. So I'm not getting your point in handpicking installations THAT DO NOT RUN OBSD as an excuse to defend your comments.

            You can't point to your lack of knowledge of such large installations as support to defend an erroneous statement. That's not journalism.

            People don't work with OBSD because they don't KNOW about it. Those that know about it frequently do not USE it. I know dozens of people who know of OBSD. Besides myself, probably 8 use it. Go to parties, yap and talk, they say yeah, it's great, then ask a pointed question, and the house of cards falls--they don't use it.

            I hate to come off attacking someone who generally gave very good press on OBSD, but these sorts of unsupported comments (you even state that you did not ask) really tick me off.

            For others that are a little clueless about large installations: most commercial installations do not use the best tool for the job. They use what they are familiar with. What they can hire admins for. What their end software will be supported on. OBSD doesn't meet these requirements, not because it's more demanding or not good enough, but because it's not the common path.

            If profit is to be made by folks that run such large installations, OBSD is out near immediately, for the simple reason that there is no centralized hardware or software vendor to pick a bone with, no one directly to blame, *no one to pass the buck to.* That has nothing to do with whether OBSD can handle sizable sites, network configuration, or demanding network installs, updates, or maintenance.

      2. By RC () on

        > That's a big contrast to the software update agents and centralized management that other OSes offer.

        A CVS up and a make && make install work more consistently and reliably than any other update method I've ever HEARD about. It's not labor intensive at all, because it can easily be automated with a single script. CVS updates and compiling the source are also quite quick after the first time it's done.

        > I also am not a fan of leaving a compiler on a production box as it's a security risk.

        There is no security risk in leaving a compiler on a production box. It is not a privileged application (e.g., not setuid or setgid), nor is it in the kernel or in any other way loaded into privileged parts of the system. Therefore there is no potential security problem. The only excuse you could have is that it makes it slightly easier for script kiddies to compile their scripts, but that's not security, it's obscurity, and poor obfuscation at that.

    4. By RC () on

      Extract the sources onto your hard drive and compile them in the background (i.e., nice 20). Then, when a patch comes out, all you need to do is a CVS update and a make. Make tracks changes in the source files, so it will only recompile the parts the patch has changed. Do it that way and it certainly will not be very time consuming.
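      That routine, sketched end to end (the branch tag and paths are assumptions; with DRY_RUN=1, the default, each command is printed rather than run):

```shell
#!/bin/sh
# First a full low-priority build, then each patch cycle is just an update
# plus an incremental make. The tag is an example; DRY_RUN=1 only prints.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "+ $*"; else "$@"; fi; }

initial_build() {
    run cd /usr/src
    run nice -n 20 make build      # full build once, in the background
}
patch_cycle() {
    run cd /usr/src
    run cvs -q up -r OPENBSD_3_3 -Pd
    run make                       # per the parent, only changed parts rebuild
}
initial_build
patch_cycle
```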

    5. By Peter Hessler () on

      Compiling the source doesn't bother me. I have a shell script that does it for me. Works on all of my boxes.

      The problem they have is the difficulty in going between releases. Every release has some manual build steps involved, and I agree, it is annoying. Not a show-stopper for me, but I can see how 10 minutes x 60 boxes can be a deterrent to a company.

      Don't know what the proper solution is.

  3. By Peter Hessler () on

    I am quite surprised, and happy, that we have the author from eWEEK commenting and clarifying his position.

    Took plenty of guts to defend (what appears to be) an unpopular opinion on a public board.

    1. By grey () on

      We should most definitely be pleased to see this kind of favourable and rather accurate press, as well as the well-thought-out responses.

      That said, I'm surprised to see the amount of discussion regarding the mention of the lack of MAC and patching mechanisms. OpenBSD, while great, isn't perfect. There are occasions where MAC would be useful [chmod and chflags certainly aren't the be-all and end-all of what I want out of filesystem permission granularity], and Timothy's mention of the manual effort required for patching is valid. In light of the fact that this article is overwhelmingly positive, I don't see the trouble people have with accepting some legitimate critiques.

      Now, I'm not trying to say that just because he has a point there should be -official- solutions to meet the need for these features. As many have pointed out, there are already 3rd party systems which can make the task of system patching simpler, and I still can't quite figure out why TrustedBSD is currently still so focused on FreeBSD instead of becoming more cross-platform. In particular, if you look at patching frameworks in other environments (e.g. Windows), you'll see that while, say, Windows Update has a nice simple 'Install' button, they have a history of botching things (e.g. the recent NAT-T IPsec patch breakage). Since MS has done such a crap-ass job, I know of at least 5 companies who base their business around offering better patch deployment options. Some aren't much better, in that they just track the MS software (e.g. Shavlik); others are better still and track patching of 3rd party software; one in particular that looks super intriguing is from
      Not only does their product let you patch & install MS & third party products, but it's cross-platform! Crazy, just think: a central patching system for Windows, various Unixes, and they even mention something about IOS! Now, if a commercial product such as that added OpenBSD support too, I think that might be far more appropriate than some half-hearted attempt within OpenBSD officially.

      Of course, you never really know what the next priority is going to be. For years we saw discussions on how non-exec stacks were semi-pointless; well, look where we are now! But also note that when energies were devoted to that, the implementation was done rather thoroughly (even as it continues to evolve). If in the future we do see a more official binary patching system, I would hope that it's thought through pretty carefully. Even though something is often better than nothing, in the case of maintaining a system you really need to be careful about what you depend on, as you can potentially do more harm than good.

      Remember, thanks to the licensing [especially as even the ports are beginning to undergo a deeper licensing audit now], you can take OpenBSD and extend off of it. There are certain things that quite frankly suit legitimate needs, but that doesn't always mean they should be part of the defaults or the official release, or at least not straight away. A 3rd party system can often serve as a model to evaluate what does and doesn't work, so that an eventual official implementation will have some chance of avoiding the pitfalls others had to work through.

      I mean, I know that's not always the way to accomplish things, as some lessons are best learned through mistakes. But I think a lot of OpenBSD users have grown very accustomed to the fact that snapshots and -current are usually very stable. Just because there is a legitimate need that would be good to meet within OpenBSD doesn't always mean it should be implemented at any given point in time, if doing so comes to the detriment of the project as a whole.

      Anyway, not that I'm an authority or anything, just some thoughts (maybe too many) on the matter.

    2. By cccck () on


Copyright © - Daniel Hartmeier. All rights reserved. Articles and comments are copyright their respective authors, submission implies license to publish on this web site. Contents of the archive prior to as well as images and HTML templates were copied from the fabulous original with Jose's and Jim's kind permission. This journal runs as CGI with httpd(8) on OpenBSD, the source code is BSD licensed. undeadly \Un*dead"ly\, a. Not subject to death; immortal. [Obs.]