log4j / log4shell infos


I had to spend part of my weekend keeping an eye on the log4j hole, the mitigations, and in general digging into who found what, who recommends what, etc.

I’ve collected those links and bits of info for sharing.

Since I shut down my Confluence instance due to all its pretty security issues, I unfortunately had to put this online as a gist on GitHub for now.

Introduction to the NFX


Today I’m starting off a little series. I’m writing about one of the wildest VM hosts you could ever try:
one of the most capable firewalls on the market,
an incredible implementation of Open vSwitch with DPDK, and management of PCIe virtual functions that lets you control them down to the virtual wire level.
And sometimes the cheapest Xeon D server you can find on eBay. (*)

The Juniper NFX series!

Currently the series consists of several models, all 1U, with between 4 and 16 CPU cores, local SSD (not just for storing a few packet traces), enough RAM for VMs (between 16 and 128GB), and lots of switched gigabit and 10-gigabit ports.


vCPE and Services on a chain

Their intended use case is as a platform for so-called virtual CPEs: features that your ISP sells to you to improve your network connectivity, insights etc. – and instead of having them delivered in yet another big box, you don’t pay for another appliance and you don’t need to manage another appliance’s lifecycle and hardware maintenance. The price points of those pieces end up lower, meaning you can maybe also run them in places where it would not have been economical before.

Examples could be…

  • running a WAN optimizer right on the edge of your network, practically IN the cable.
  • running a virtual load balancer
  • running two firewalls

The NFX takes care of all the networking tasks in this: you can essentially plug the virtual machines inside into one another, and also do anything you want with the physical ports it gives you. A certain port could be plugged into a VM directly. A port could be in a VLAN that is hooked up to a few VMs. Your WAN traffic could land in JunOS, undergo some modification, then be passed to a Palo Alto or FortiGate VM and then hit a k8s cluster on your LAN – or just a VM running a static webserver in the same box. That’s something the telco world calls ‘service chaining’.

The feature list is pretty much endless; you can see more here:

https://www.juniper.net/us/en/products-services/sdn/nfx-series/datasheets/1000563.page

I have tried to keep this inside… but… but… it’s a bit like what the UniFi Dream Machine Pro is for the home/SMB network, just for the big multinational carriers/ISPs and those of their clients that need a redundant 10g link to a ‘small site’ –
if you want to be able to deploy a new service using VMs or containers within minutes after it’s been ordered. (And I guess that’s what they call a virtual network function.)

On a more serious note, this is one of the most capable systems you can get your hands on. Capable in terms of the breadth of possibilities and the way the puzzle pieces fall into place.

There are better individual solutions for many things, but the sum is the interesting part here, and it seems unparalleled to me. There are still quite a few things they need to work on to make it rounded enough to be worth considering for anyone NOT a national carrier.

(*) caveat: and then you’ll find out there’s no factory-reset command or pinhole. But I wanna invite you to keep reading for a bit.

What we will look at

  • The hardware
  • The current state of the software

I won’t try to explain everything, especially since I don’t understand everything myself. Instead I’ll also share the links that I used to understand the whole thing.

And I’ll show you practical uses, meaning various virtual thingies running on the NFX.

Classical NFVs like firewalls and analyzers. And, say, my mail server. Because, why not.

PC build


This time a very simple post:

A nice PC build for my daughter

 

  • Antec P110 Silent
  • MSI B450M Mortar Titanium
  • G.Skill Aegis DDR4-3000 16G
  • Ryzen 7 2700
  • MSI Radeon RX Vega 56 Air Boost 8G OC

Two parts I already had were a (not perfect) Xilence power supply and a pretty fast Toshiba 512GB NVMe SSD. If I hadn’t had those, I’d have gone with these:

  • Seasonic Focus Plus Gold 650W
  • Samsung 970 EVO Plus 512GB

Some details:

I picked the Antec case due to my really good experiences with the Antec Solo years ago. I used to have 2×15K U320 disks in that one and it kept them silent – it was just fine, really. So I expected about the same from the newer, larger tower. I was a bit worried about the dust filters – would they be able to handle two cats’ hair production?

The RAM is cheap, so if you also want to use this system for work (VMs), then get two of these kits to have 64GB. The board has two M.2 slots, so you can also add a 1TB 970 EVO, or an Optane if you’d like to play with ZFS (as ZIL). Generally, the performance difference between the 512GB and 1TB EVO is minimal. That’s a change from earlier times, i.e. the 256GB vs. 512GB Pros. Probably the reasons are the larger SLC cache areas and the 3-core processor having more capacity. In any case, that’s why I had picked the 512GB EVO for the original build.

The Vega 56 is a compromise – MSI overprices the 1660 Ti and the Vega 64 is still a bit too expensive. The Vega 56 has – at least from MSI – the best price/performance.

 

Build notes:

The case is stable, made of metal, has the PSU on the floor and pretty OK cable management.

It includes dampening mats on the doors, and the disk sleds are well made. Everything is just solid and fits perfectly! The fans are a lot better than what I got from NZXT years ago. A downer was that the ATX shield gets mounted with some free space at the sides – I don’t get that and sincerely think it was a CAD mistake they fixed in hardware. Also, they were a bit stingy on the mobo screws and didn’t include M.2 screws. Simple things that make me happy if done and a bit unhappy if not. The case also really doesn’t want you to have a DVD/Blu-ray drive or such – it seems there’s really NO slot, so you’ll need a USB 3.0 drive on your desk instead. Oh, and the screws are all in one of the drive cages; I spent some time searching. I didn’t find a way to take off the cable management shield, and the graphics card holder got loose, but it was, uh… it was fine.


In summary the case is a bit heavy, but nonetheless really one of the best cases I’ve ever laid my hands on.

The mainboard and the graphics card BOTH have some protective shrink wrap on them – be sure to search around and check that you find all of it. The graphics card also had a protector on the PCIe connector. A waste of time really, but OK.

The board looks gorgeous and the layout is really good.

You just need to make sure to plug cables into SATA0/1 before installing the GFX card. The M.2 SSD you can even mount last. The stock AMD fan required me to remove the fan mounts from the board and screw it directly onto the retention bracket. I hated that part a lot, but it’s not MSI’s fault – just something to be aware of. The stock AMD fan is also a bit noisy, but the case takes care of that. The case also has an HDMI port on the top front IO panel, but the board doesn’t have an internal HDMI connector – I didn’t even know such a thing could exist. M.2 screws came included with the mainboard! I found them after searching for alternatives for some time. At this point I also noticed that the case doesn’t have an HDD LED, a reset button or any of that. But TBH I didn’t care. On the really weird side, the mainboard still has an LPT header. Just in case you still have your old LaserJet 4 around, I guess?

The graphics card is one large brick of computer stuff, so your only interaction is to take off the PCIe protector and 2-3 pieces of protective stickers, plug it in, and connect the power. It wants 2×8-pin power and seems to be able to draw something like 180W. I’m pretty worried whether the old PSU can handle that at all – it has 2×8-pin, but in reality it’s just 1×8-pin. Well, if the box crashes under heavier load I’ll just go and replace the PSU. Likely it’ll be good enough for League and such stuff. It’s 650W and the box should take around 350W at MAX load. The card has some weird LED strip that informs you about – I think – its power draw.

Finally, the SSD: by the spec it has half the write performance of the EVO, and yet Windows shuts down in about 0.5s and boots in something like 3 or 4 seconds.

The price of the box is – depending on luck – somewhere between 900 and 1150 euros. All parts are good enough for 5+ years, but can also be thrown out without having wasted a lot of money. I think the only bad thing is that it’s not PCIe 4.0 yet (as far as I know).

I wanted her to have a box that isn’t just good enough for a year and then, a few years later, isn’t even worth fixing/upgrading (like her current one), but instead something that is a good investment. I think this is gonna do the job pretty well.

If I’d built it for myself, I’d probably have put in two SSDs, a lower-clock-more-cores CPU, 32GB RAM (my old PC from basically 2010 has 16GB now) and of course a lower-spec GFX card. Maybe I’d also have gotten a different CPU fan, but it wasn’t really bad.

 

Ah yeah, Windows. Fuck that. I am glad it installed so fast and that I didn’t have to fix my daughter’s old PC.

Gonna install Chocolatey so I can just say “choco install chromium”, and I wish I could have used Windows LTSB so she wouldn’t even have IE installed.

Happy Easter, everyone.

Bacula Packet size too big


So I ended up getting the dreaded “Packet size too big” error from Bacula.

I wasn’t sure when it started, either with the High Sierra update, or with some brew update.

The error looks like this:

*mess

20-Oct 12:53 my-bacula-dir JobId 0: Fatal error: bsock.c:579 Packet size=1073741835 too big from “Client: Utopia:192.168.xx.xx:9102. Terminating connection.

*quit

It can be reproduced simply by doing “status client”, and will also happen if you try to do a backup.

If you look into the error you’ll find an entry in the Bacula FAQ that covers Windows specifics, and how to proceed if it’s not one of the known causes they explain:

Get a trace file using -d100, and report on the list.

So, the first thing I found: it won’t make a trace file, at least not on OSX.

You can alternatively use -d100 plus a tcpdump -c 1000 port 9102 to get reasonable debug info.
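In practice that could look roughly like this – a minimal sketch; the daemon invocation and the interface name (en0) are assumptions, adjust for your box:

# run the file daemon in the foreground with debug level 100
sudo bacula-fd -f -d100

# in a second terminal: capture the first 1000 packets of FD traffic
sudo tcpdump -i en0 -c 1000 -w bacula-fd.pcap port 9102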

While looking at the mailing list I also found that the general community support for any mention of this error is horrible.

You’re being told you’ve got a broken NIC, or that your network is mangling the data, etc.

All of which were very plausible scenarios back in, say 2003, when bacula was released.

Nowadays, with LRO/GSO and 10g NICs, it is not exactly unimaginable to receive a 1MB-sized packet. For a high-volume transfer application like backup, it is in fact the thing that SHOULD HAPPEN.

But in this case people seem to do anything they can to disrupt discussion and blame the issue on the user. In one case they did that with considerable effort, even when a guy proved he could reproduce it using his loopback interface, with no network or corruption involved at all.

I’m pretty sick of those guys and so I also did everything I could – to avoid writing to the list.

Turns out the last OSX brew update went to version 9 of bacula-fd while my Alpine Linux director is still on 7.x.

Downgrading using brew switch bacula-fd 7.xxx solved this for good.

Now the fun question is: is it that TSO somehow influences bacula 9, but not 7, and my disabling TSO via sysctl had no effect? Or is it that they did at last allow more efficient transfers in newer versions, and that broke compatibility BECAUSE for 10 years they’ve been blaming their users, their users’ networks and anything else they could find?

Just so they’d not need to update the socket code?

There are other topics that have been stuck for a decade, and I wonder if they should just be put up as GSoC projects – to benefit the community, but also anyone who can tackle them!

  • multisession FDs (a very old feature in commercial datacenter-level backup; often it can even stream multiple segments of the same file to different destinations. Made sense for large arrays, and makes sense again with SSDs)
  • bugs in the notification code that cause the interval to shorten after a while
  • fileset code that unconditionally triggers a full backup if you modify the fileset (even if you e.g. exclude something that isn’t on the system)
  • base jobs not being interlinked and no smart global table
  • design limitations in the nextpool directives (you can’t easily stage, archive and virtual-full at the same time for the same pool)
  • bad transmission error handling (“bad”? NONE!). At least now you can resume, but why can’t it just do a few retries – why does the whole backup need to abort in the first place if you sent, say, 5 billion packets and one of them was lost?
  • Director config online reload failing if SSL is enabled and @includes of wildcards exist.
  • Simplification of running multiple jobs at the same time against the same file storage, but with each job in its own file. ATM it is icky, to put it nicely. At times you wonder if it wouldn’t be simpler to use a free virtual tape library than to deal with how Bacula integrates file storage.
  • Adding utilities like “delete all backups for this client” and “delete all failed backups, reliably and completely, to the point where FS space is freed”.

 

It would be nice if it didn’t take another 15 years until those few but critical bits are ironed out.

And if not that, it would be good for the project to just stand by its limitations; it’s not healthy or worthy if some community members play “blame the user” without being stopped. The general code quality of Bacula is so damn high that there’s no reason one couldn’t admit to limitations. And it would probably be a good step towards solving them.

Some more Windows stuff?


 

 

FYI I did some more windows things 🙂

Below are a few lessons learned and some links that were helpful.

Non-Routing

Seems Windows has had broken handling of ICMP redirects ever since Win7 was introduced.

They’re bad, but they’re also turned on by default in Windows (configurable via some special corner of GPO), and yet they are not respected. According to the docs, a redirect should result in a 10-minute routing table entry, but it never does.
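If you want to switch the mechanism off explicitly rather than trust the broken default handling, a hedged sketch (elevated prompt; double-check the parameter against your Windows version):

# stop accepting ICMP redirects at all
netsh interface ipv4 set global icmpredirects=disabled
# verify the current setting
netsh interface ipv4 show global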

So, even for temporary hacks: no. Remove them and rebuild things properly right away. Better than debugging a broken kernel!

routing

So we found we needed to push some extra static routes to our test clients via DHCP.

How do you do that, especially if your DHCPd is from the last decade?

This is how:

http://thomasjaehnel.com/blog/2010/01/pushing-routes-via-dhcp.html
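The short version, in case the link ever dies: it’s DHCP option 121 (classless static routes, RFC 3442). A minimal sketch for ISC dhcpd – the 10.20.30.0/24 route and the 192.168.1.254 gateway are made-up examples:

# declare the RFC 3442 option (code 121)
option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
# Windows clients historically look at Microsoft's own option 249 instead
option ms-classless-static-routes code 249 = array of unsigned integer 8;

subnet 192.168.1.0 netmask 255.255.255.0 {
  # format: prefix length, significant octets of the destination, gateway
  # here: 10.20.30.0/24 via 192.168.1.254
  option rfc3442-classless-static-routes 24, 10, 20, 30, 192, 168, 1, 254;
  option ms-classless-static-routes    24, 10, 20, 30, 192, 168, 1, 254;
}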

Domain controller backups

Normally, Windows always keeps a backup in a configurable location. By default, the backup should also go to the NTDS folder. I recommend you check it, because we reproducibly found that the backup file is not there.

The most perfect howto / KB article for that whole kind of stuff seems to be here:

Active Directory Database Maintenance

A secondary help could be this one:

http://eniackb.blogspot.de/2009/06/active-directory-database.html

 

Windows Repair

The repair mode is missing a few commands.

Ah, and if you wanna chkdsk, remember to first use diskpart to assign a new drive letter to your C:\ volume so you check the right thing.
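Roughly like this – a sketch; the volume number will differ on your machine:

diskpart
  list volume            <- find the Windows volume
  select volume 2        <- assumed: volume 2 is the Windows install
  assign letter=C
  exit

chkdsk C: /f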

 

QEMU-GA

Still haven’t found any way to get the goddamn QEMU guest agent running well on Windows.

 

SSH Key auth

I looked into doing key-based auth and GSSAPI auth for SSH.

It seems doable: on one end you store the key in a field named altSecurityIdentities and prefix it with SSHKey: so the query will match the right data.

That query is done using a helper that comes with sssd and is configured in sshd_config (I think).

That means they’re not doing it the plain-SSH way, but I think many of the “support LDAP certs” approaches in SSH have stayed in a “here’s a patch” state, so this is rather something well integrated via sssd.
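For reference, the helper I mean is sss_ssh_authorizedkeys; a minimal sketch of the wiring – the attribute mapping in sssd.conf is my assumption about how that altSecurityIdentities field would be hooked up, not something I’ve verified:

# /etc/ssh/sshd_config
AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
AuthorizedKeysCommandUser nobody

# /etc/sssd/sssd.conf
[sssd]
services = nss, pam, ssh

[domain/example.com]
# assumption: point sssd at the attribute that carries the key
ldap_user_ssh_public_key = altSecurityIdentities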

The GSS part seems a bit questionable, with multiple parties building patched versions of PuTTY. I hope by now the official one is good enough. It seems to be mostly about sending the right stuff from PuTTY, not server-side ickiness.

I found one guy who re-wired all of that to go via LDAP because he didn’t know there’s a Kerberos master in his Windows AD. But good to know that’s also possible 🙂

A definite todo with this would be to properly put your host keys into DNS so it’s a really safe and seamless experience. DNS registration from Linux into AD *is* possible, and with Kerberos set up it shouldn’t involve security nightmares either. So it’s just about registering one more item (A, PTR and SSHFP).
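The SSHFP part at least is trivial to generate – a sketch (the hostname is a placeholder):

# print SSHFP resource records for this host's keys, then register them in DNS
ssh-keygen -r myhost.example.com

# and tell the clients to actually use them (ssh_config)
VerifyHostKeyDNS yes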

I would like to get that set up nicely enough that it can be enabled anywhere. My biggest worry is that in a cloud context you’re instantiating new boxes, so you definitely have a credential management issue.

Unless I do it the hard way and create the computer account from the ONE controller, then put the credential into the VM context/env so the box can pick it up and use this initial token to take over its own computer account.

At that point it would be “proper” and make me happy, but I’ve learned that THAT kind of thing is something you can only build if someone needs it and pays you for it.

(Hobby items should not go into the 4-week-effort range. Yeah, you can build “something” in 2 days, but “proper” will take a lot longer.)

I’m totally interested in some shortcut that would do a minimal thing instead of the whole.

 

QEMU:

Libvirt is hilariously stupid – we restored a VM backup image and found it unbootable. It went on like that for some time.

In the end it turned out it was a qcow2, not a raw image. I’m kinda pissed off about this since there are a bazillion tools in the KVM ecosystem that know how to deal with multiple image types – especially qemu itself. But it’s too fucking stupid to autodetect the type. A type that can be detected as simply as running “file myimage.img”.
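For the record, checking and fixing that by hand is a one-minute job – a sketch, with the image path and domain name as placeholders:

# either of these happily tells you the real format
file /var/lib/libvirt/images/myimage.img
qemu-img info /var/lib/libvirt/images/myimage.img

# then fix the disk definition in the domain XML
virsh edit mydomain
#   change <driver name='qemu' type='raw'/> to type='qcow2'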

 

10Gbit

We also did a 10gbit upgrade (yes, of course SolarFlare NICs) and found that our disk IO is still limited – limited by the disks behind the SSD cache. So those disks need to go.

What’s vastly improved is live migration times (3-6 seconds for a 4GB VM) and interactive performance in RDP. Watching videos over RDP with multiple clients has become a no-brainer.

I have no idea why I’m not getting the same perf at home – 10g client, 2x10g server, but RDP is much slower. It might be something idiotic like the 4K screen downscaling. All I know is I have no idea 🙂

OTOH my server has a fraction of the CPU power, too.

 

Nobrains

Finally, I again managed to split-brain our cluster and GOD DAMN ME next time I’ll learn to just pull the plug instead of any, any other measure.

(How: misconfigured VLAN tagging – the hosts run untagged and I had a tagAll in place. I should have reset the whole port to defaults before starting.)

As kids we would wonder, but we didn’t know 2016


As kids we would wonder why no one treated us seriously, like grown-ups, like someone capable of reasoning.

But it’s not about that.

It’s about letting someone keep a glimpse of trust, looking back to a few years where they didn’t yet see wars starting, not being able to do anything to stop an escalation.

Families getting wiped out as a necessity (to someone, after all) on a mountain roadside because they happened to witness a spy’s assassination.

Prisons that have been given up to the inmates and just patrolled from the outside.

Watching your favourite artists die.

Hearing that a friend committed suicide.

Actually, people getting so deeply hopeless that they willingly crash their own airplane, wiping out whole school classes.

Undercover investigators who were the actual enablers of the Madrid subway bombing. Knowing how they’ll also be forever lost in their guilt, not making anything better.

Seeing how Obama’s goodbye gets drowned out by humanity wondering whether or not Trump had women pissing on <whatever> for money.

Then asking yourself why that would even matter, considering T. is definitely a BAD PERSON – so who cares what kind of sex he’s into; why can someone’s private details take attention away from the plain fact that he’s absolutely not GOOD?

Watching a favorite place being torn down for steel-and-glass offices.

Understanding what a burnt down museum means.

Life’s inevitable bits – being confronted with them only works if you’ve had a long peaceful period in your life.

And that, that’s what you really just shouldn’t see or rather understand too early.

After all, there’s an age where we all tried to get toothpaste back into the tube, just because we wouldn’t believe that it simply doesn’t work that way.

 

Sorry for this seemingly moody post, it’s really been cooking since that 2012 murder case. Today

 

 

On the plus side, there are movies, Wong Kar-Wai and so many more. There’s art, and the good news is that we can always add more art, and work more towards the world not being a shithole for the generations after us.

But, seriously, you won’t be able to do much good if you look at the burning mess right from the start.

happy about VM compression on ZFS


Over the course of the weekend I’ve switched one of my VM hosts to the newer recommended layout for NodeWeaver – this uses ZFS, with compression enabled.

First, let me admit some things you might find amusing:

  • I found I had forgotten to add back one SSD after my disk+L2ARC experiment.
  • I found I had one of the two nodes plugged into its 1ge ports instead of the 10ge ones.

The switchover looked like this (a command-line sketch of the chunkserver side follows after the list):

  1. pick a storage volume to convert, write down the actual block device and the mountpoint
  2. tell LizardFS i’m going to disable it (prefix it with a * and kill -1 the chunkserver)
  3. Wait a bit
  4. tell LizardFS to forget about it (prefix with a # and kill -1 the chunkserver)
  5. umount
  6. ssh into the setup menu
  7. select ‘local storage’ and pick the now unused disk, assign it to be a ZFS volume
  8. quit the menu after successful setup of the disk
  9. kill -1 the chunkserver to enable it
  10. it’ll be visible in the dashboard again, and you’ll also see it’s a ZFS mount.
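For steps 2-5 and 9, a hedged sketch of what that means on the chunkserver itself – the config path and mountpoint are what I’d expect on a stock LizardFS install, so treat them as assumptions:

# /etc/mfs/mfshdd.cfg on the chunkserver:
#   a leading * marks the path for removal, a leading # makes it forgotten
*/srv3

# kill -1 (SIGHUP) makes the running chunkserver re-read its config
kill -1 $(pidof mfschunkserver)

# later: comment the line out with #, SIGHUP again, then unmount
umount /srv3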

Compression was automatically enabled (lz4).

So far I’ve only looked at the re-replication speed and disk usage.

Running on 1ge I only got around 117MB/s (one of the nodes is on LACP and the switch can’t do dst+tcp-port hashing, so you end up in a single channel).

Running on 10ge I saw replication network traffic go up to 370MB/s.

Disk IO was lower since the compression had already kicked in, and the savings have been vast.

[root@node02 ~]# zfs get all | grep -w compressratio
srv0  compressratio         1.38x                  -
srv1  compressratio         1.51x                  -
srv2  compressratio         1.76x                  -
srv3  compressratio         1.53x                  -
srv4  compressratio         1.57x                  -
srv5  compressratio         1.48x                  -

I’m pretty sure ZFS also re-sparsified all the sparse files; the net usage on some of the storage volumes went down from 670GB to around 150GB.

 

I’ll probably share screenshots once the other node is also fully converted and rebalancing has happened.

Another thing I told a few people was that I ripped out the 6TB HDDs once I found that $customer’s OpenStack cloud performed slightly better than my home setup.

Consider that solved. 😉

vfile:~$ dd if=/dev/zero of=blah bs=1024k count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.47383 s, 434 MB/s

(this is a network-replicated write…)

Make sure to enable the noop disk scheduler before you do that.

Alternatively, if there are multiple applications on the server (e.g. a container-hosting VM), use the deadline disk scheduler with the nomerges option set. That matters. Seriously 🙂
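What I mean, as a sketch – the device name is a placeholder, and on newer kernels with blk-mq the scheduler names are none/mq-deadline instead:

# single dedicated workload in the VM: don't let the guest reorder anything
echo noop > /sys/block/vda/queue/scheduler

# mixed workloads: deadline, and disable request merging entirely
echo deadline > /sys/block/vda/queue/scheduler
echo 2 > /sys/block/vda/queue/nomerges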

 

Happy hacking and goodbye

FrOSCon follow-up: Rudder (1)


In the talk I had mentioned that it’s very handy that Normation has their own C developers and maintains their own version of the Rudder agent.

Case in point: a critical bug in CFEngine that had simply already been fixed there. And 30 minutes after my question the patch was released, too…

15:29 < darkfader> amousset: i'm inclined to think we could also
use a backport of https://github.com/cfengine/core/pull/2643
15:30 < darkfader> unless someone tells me "oh no i tested with 
a few thousand clients for a few months and it doesn't affect us" 😉
15:33 < amousset> darkfader: it has already been backported 
(see http://www.rudder-project.org/redmine/issues/8875)
15:34 < Helmsman> Bug #8875: Backport patch to fix connection cache 
( Pending release issue assigned to  Jonathan CLARKE. 
URL: https://www.rudder-project.org/redmine//issues/8875 )
15:37 < darkfader> amousset: heh, pending release 
15:38 < Matya> hey you just released today 🙂
15:40 < amousset> yesterday actually 🙂
16:07 < jooooooon> darkfader: it's released now 😉

 

 

… because, you see, there is an actual release process!

FrOSCon follow-up: hosting


 

 

The two German “we’re serious about being different” web hosters were both at FrOSCon.

By that I mean Hostsharing eG and UBERSPACE.

Hostsharing was founded in 2001 after the Strato meltdown; their current (or has it always been?) slogan is: Community Driven Webhosting. They are a cooperative, meaning you’re ideally not just a customer, you also have a say. Of course there’s still a core group that keeps the gears turning.

UBERSPACE I’ve known for what feels like 2-3 years. They do “Hosting on Asteroids” 😉

Here you get a say in the price yourself, and you get a flexible solution without all the bullshit that the mass hosters build around theirs.

So they are two generations apart. The idea still seems similar to me:

  • A pricing model oriented towards the best possible service.
  • Openness. The concepts are understandable; a user can tell where their data will end up and can clearly see that it’s well looked after.
  • Serious server operations (i.e. multiple locations, backups kept elsewhere, actual reactions to problems, proactive management).
  • Only putting competent people on the job! That means many years of real Unix experience, not just a bit of PHP and panel clicking. Everyone involved is roughly on the same level.

In contrast there’s the mass market, where often enough you can read that customers are once again being punished for a provider’s security holes, that there were no backups, that there was no DR concept, that broken hardware is kept running (“Why does the RAM have ECC if I’m then supposed to have it replaced?”), and so on.

I couldn’t talk to UBERSPACE because I didn’t even notice they were there. I did sit in a car with one of them, but I was catching up on sleep. 🙂

My main piece of advice to Hostsharing: put prices on it.

Typical for idealists 🙂

Not to be underestimated though: their start was 15 years ago now, and since then they have simply delivered quality.

I wish UBERSPACE the same, and both of them that they can scale their concepts well enough to put the other providers in the market under pressure through quality.

 

Links:

https://uberspace.de/tech – yes, the first link goes to the tech page. In the end, only the technology, the team and the processes matter. At least if you don’t need a website construction kit.
https://www.hostsharing.net/ – the info page

https://www.hostsharing.net/events/hostsharing-veroeffentlicht-python-implementierung-des-api-zur-serveradministration – the admin API they hadn’t even told me about 🙂

FrOSCon follow-up: coreboot


 

Coreboot

I was very happy that someone from coreboot was there. The booth was organized by the BSI, who otherwise also help the project with test infrastructure.

I like that; it’s close to what I consider the “important” work of the BSI.

The coreboot developer showed me their new website (looks good, but isn’t online yet), and I also got to refresh my knowledge, which was many years out of date.

At Intel alone there are 25 people contributing to coreboot!

Natively supported hardware keeps growing (from 0 to 2 or so, but hey!)

There’s a pretty great Chromebook from HP that I should get.

The payloads are getting more diverse – I said that I simply hate SeaBIOS and would love to have a Phoenix clone. He asked some follow-up questions and made me realize that what I actually want is a much better PXE. (See VMware and VirtualBox; do not see KVM or PyPXEboot!)

And he had a recommendation: Petitboot – an SSH-able PXE loader that can simply do anything and everything we could wish for as admins or QA engineers.

https://secure.raptorengineering.com/content/kb/1.html

Definitely on the test list.

 

What else there was at the BSI booth: GPG, OpenPGP and OpenVAS.

OpenVAS would have interested me too, but I think one complete conversation is better than two half ones. 🙂