Category Archives: Lightning Talks

HOWTO: Using WPF 4 (and .NET 4 in general) from a .NET 3.5 app

Download the "HOWTO: Using WPF 4 (and .NET 4 in general) from a .NET 3.5 app" slides, or view them online.

Transcript:

Cool, okay, so, this is my talk: HOWTO: Using WPF 4 (and .NET 4 in general) from a .NET 3.5 application.

So, here at Red Gate we write an awful lot of plugins for this application, which is Microsoft SQL Server Management Studio: Prompt, Source Control, Test, Tab Magic, SIP, Doc, Dependency Tracker, and now the new thing that I’m working on, which is a plugin for Deployment Manager to create packages from your database.

So, the problem is that SSMS 2008 is running on the .NET 2 runtime, which you can see here from the debugger: mscorlib v2, and SSMS 2012 is running on the .NET 4 runtime: mscorlib v4.

So, the question when we’re writing an app is which version of the runtime do we target?

So, when I started off writing the plugin for SSMS I targeted .NET 4, and this worked perfectly fine on my machine using SSMS 2012, but when you start loading it in SSMS 2008, you get this error message: “This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.”

So this means that to load my application on both SSMS 2008 and SSMS 2012, I have to target the .NET 2 runtime, which is framework version 3.5.

And that’s exactly what SQL Source Control does, and it’s fine, you can write an application like that, until you try to reference NuGet.

When you reference NuGet you get this error message here, which is from using NuGet to install a reference to the NuGet library, which is quite funny, saying that NuGet doesn’t have a DLL that targets 3.5, so you can’t reference it unless you target .NET 4.

So, we’re now in a quandary. If we target 3.5 we can’t reference NuGet, and if we target 4 then we can’t load into SSMS 2008, but our product manager wants both of these: he wants our application to run in all versions of SSMS, because that’s where the customers are, and he wants our application to create NuGet packages, therefore, it’s going to have to call the NuGet library to do that.

So, what solution do we have to this?

So, the solution which we came up with, thanks to Mike who pointed out that this was possible, is that we have SSMS at the top, which will either be running on the .NET 2 or the .NET 4 runtime, depending on whether it’s 2008 or 2012. And then we have a child process, RedGate.SQLCI.UI.exe, and that’s running on .NET 4.

So, now any operation that we want to do on .NET 4, say for example if we want to call the NuGet library, or if we want to use the Task class in the framework, or if we want to use any of the new shiny features that are in WPF 4, we do it in the child process. And anything that has to do with SSMS, like right-click menus and so forth we can do in the top one.
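As a rough sketch of how that split can be wired up (this is a hypothetical sketch, not the actual Red Gate code: the class, method, and URI names are made up, and in practice you’d share an interface assembly and wait for the child to signal it’s ready), the .NET 4 child process can expose a MarshalByRefObject over a .NET remoting IPC channel, and the SSMS add-in can spawn the child and call it through a proxy:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

// Hypothetical service exposed by the .NET 4 child process (RedGate.SQLCI.UI.exe).
// Anything that needs the .NET 4 runtime (the NuGet library, Task, WPF 4)
// sits behind methods on this object.
public class ChildServices : MarshalByRefObject
{
    public void CreateNuGetPackage(string nuspecPath, string outputDirectory)
    {
        // ...call into the NuGet library here, on the .NET 4 side...
    }
}

public static class ChildProcess
{
    // Entry point of the child process: publish the service over an IPC channel.
    public static void Main()
    {
        ChannelServices.RegisterChannel(new IpcChannel("RedGate.SQLCI.UI"), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(ChildServices), "services", WellKnownObjectMode.Singleton);

        Console.ReadLine(); // the real app would run a WPF dispatcher loop instead
    }
}

public static class SsmsAddIn
{
    // Inside SSMS (running on the .NET 2 runtime, targeting 3.5):
    // spawn the child and get a remoting proxy to its service.
    public static void CreatePackageFromSsms()
    {
        Process.Start("RedGate.SQLCI.UI.exe");
        var services = (ChildServices)Activator.GetObject(
            typeof(ChildServices), "ipc://RedGate.SQLCI.UI/services");
        services.CreateNuGetPackage(@"C:\db.nuspec", @"C:\packages");
    }
}
```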

The problem is that our UX designer wants us to be ingeniously simple; he doesn’t want us to just launch off a child process and have that start displaying windows and everything.

So this is obviously still in development, which is why it looks utterly hideous, but what we’ve got here is SSMS and this is a modal dialog, where the control in this modal dialog is running in the child process and the modal dialog itself is running in the parent process inside SSMS.

So to the users this behaves just like a normal modal dialog and you won’t know what’s going on, and in fact, you’ve seen all of this happen if you’re running Chrome, because Chrome spawns off a child process for each tab, as does IE nowadays, but as a user you can’t tell, because they all seamlessly integrate into one application. And this is because in Windows you can have controls running in separate programs, and as long as they’re all running under the same user account, the security system doesn’t mind displaying them all together.

So that’s what we’re going to do.

So, how do we actually do this without falling back to P/Invoke, because obviously P/Invoke is bad and wrong? We want to use as much stuff in the framework as we can.

So, the framework defines an interface called INativeHandleContract, which has got a method on it called GetHandle, which basically gives it the window handle, and irrespective of how it’s implemented under the covers, the important things are these two methods. The first one takes a framework element, which is some kind of WPF control, and converts it to the interface, and the second method takes the interface and converts it back to a framework element.

And the awesome thing here is that framework elements can’t be passed across a .NET remoting channel between the SSMS process and the child process, but implementations of the interface can be passed across just fine, because what happens is: in the child process we call ViewToContractAdapter, which converts our framework element into an implementation of the interface. The framework’s implementation still can’t go across the boundary, because it’s not serializable or MarshalByRef, but it’s just an interface, so we can create our own class that is MarshalByRef, implement the interface on it, and proxy each method call onto the actual implementation that the .NET framework has given us. That MarshalByRef class can now go across the .NET remoting boundary, and on the SSMS side of things we just call ContractToViewAdapter, which converts the interface back into a framework element, and now the SSMS process can just put it in the form as a child control.
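Concretely, the wrapper is only a few lines. Here’s a minimal sketch (the class and method names are mine, not the ones in the real code; INativeHandleContract lives in System.AddIn.Contract and FrameworkElementAdapters in System.Windows.Presentation): the child process exports a control by wrapping the framework-provided contract in a MarshalByRef proxy, and the SSMS process turns whatever contract it receives back into a FrameworkElement.

```csharp
using System;
using System.AddIn.Contract;   // INativeHandleContract, IContract
using System.AddIn.Pipeline;   // FrameworkElementAdapters
using System.Windows;          // FrameworkElement

// A MarshalByRef wrapper so the (non-serializable) contract the framework
// gives us can cross the .NET remoting boundary. Every call is simply
// forwarded to the real contract living in the child process.
public class NativeHandleContractProxy : MarshalByRefObject, INativeHandleContract
{
    private readonly INativeHandleContract inner;

    public NativeHandleContractProxy(INativeHandleContract inner)
    {
        this.inner = inner;
    }

    public IntPtr GetHandle() { return inner.GetHandle(); }

    // INativeHandleContract inherits IContract, so forward those members too.
    public int AcquireLifetimeToken() { return inner.AcquireLifetimeToken(); }
    public void RevokeLifetimeToken(int token) { inner.RevokeLifetimeToken(token); }
    public int GetRemoteHashCode() { return inner.GetRemoteHashCode(); }
    public IContract QueryContract(string contractIdentifier) { return inner.QueryContract(contractIdentifier); }
    public bool RemoteEquals(IContract contract) { return inner.RemoteEquals(contract); }
    public string RemoteToString() { return inner.RemoteToString(); }
}

public static class CrossProcessWpf
{
    // Child process (.NET 4): wrap a WPF control so it can be handed
    // across the remoting channel.
    public static INativeHandleContract Export(FrameworkElement control)
    {
        return new NativeHandleContractProxy(
            FrameworkElementAdapters.ViewToContractAdapter(control));
    }

    // SSMS process: turn the contract back into a FrameworkElement and
    // drop it into the dialog as a child control.
    public static FrameworkElement Import(INativeHandleContract contract)
    {
        return FrameworkElementAdapters.ContractToViewAdapter(contract);
    }
}
```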

And so that’s absolutely all you have to do; there’s some code on GitHub that does all of this, and we’re now using this.

There are some slight problems, which are…

So this is, yes, so this was on Yammer this morning: this is the number of crash reports coming into the public-facing web service every hour, and you can see that at 9:30 my tester finds a bug on his machine, and it turns out that WPF has some awesome retry logic, so if anything goes wrong WPF just retries automatically, and Ali left the thing running, and that graph happened.

The reason this hit us was that in the SSMS process all the exceptions get displayed in nice little crash dialogs and aren’t sent in automatically, but, for debugging, in the child process I had set them to just send in exception reports automatically without popping up consent dialogs, which will change before release, because this sort of thing happens.

So, yeah, that’s the end of the talk. Apart from this, it just works wonderfully. Cool, any questions?

Measuring technical debt (and why it matters)

Download the "Measuring technical debt (and why it matters)" slides, or view them online.

Transcript:

Cool, so this came out of a training course that Jeff and I went to in London, and this was one of the bits of it that I found most interesting.

So, measuring technical debt and why it matters.

So, it matters because you get what you measure. If you give a team a particular statistic, and say you’re going to be measured on this statistic, the team are going to move that in the direction that you want them to move it in, even if that means that the code doesn’t actually get any better.

So, we have to choose a measure for technical debt such that when we actually put in the effort to pay back the technical debt the time investment is actually worth it.

So, the talk is basically about two measurement approaches.

So this is the first one, which is the delta from ideal, and basically you define a whole bunch of metrics where you know what the ideal looks like, and you can look at your code and see what you’re currently scoring on each metric, and then you just kinda look at how far away you are.

So, the first metric (these are just example metrics, you can pick your own) is how many compiler errors and warnings we’ve got. So, this is the SQL Compare UI, which includes the Compare Engine and everything else, and it compiles on my machine, brilliant, but there are 1,000 warnings. And so, if we were to spend time reducing that warning count, are we actually making the code better? Probably not.

ReSharper errors. There’s nearly 20,000 ReSharper errors across 2,500 files. If we put the time in to make those better, and reduce that count, are we actually making the code better? Probably not.

Code duplication. So this is using the code analysis tools in Visual Studio Ultimate across the Compare Engine: it found 58 exact matches, 75 strong matches, etc, etc… Now these, clearly, if we fix them, it would make the code better, because with most of these the next bug fix won’t fix the bug properly: you’ll fix it in one of the copies but you won’t deal with the duplicated code. So there, you are making the code better.

Next up is unit test coverage. This is SQL Source Control and the plugin that we’re writing for Deployment Manager. Again, pushing these numbers up, clearly you are making the code better.

And finally, there’s just our gut feel. So the previous four we could measure automatically, but gut feeling not so much. So this is the Work class in Compare: it’s a partial class spread across 4 files, and the files are quite big. They’re so big that ReSharper’s intellisense is a bit slow when you’re dealing with these files. And so clearly, yes, we could put time in to fix this, but again, are we actually making the application better?

So, basically, in summary, you have to pick the metrics carefully, otherwise, there’ll be a high opportunity cost to fixing that metric, like the compiler warnings we started with at the beginning.

And next up, in a large application with an awful lot of technical debt, this measure is basically useless, because you’re going to end up with something like this, where it’s just insurmountable to deal with this. Or your unit test coverage will be basically 0, and it’s just insurmountable to deal with that.

So, this is the approach that the guy on the course recommended. So, instead of kinda looking at the entire application, and looking at metrics over it, you basically look at the things that you’re actually going to have to change. Because some bits of the application will just sit there and don’t need changing, but more interesting are the bits that you’re going to change and how those bits affect the technical debt.

So, for this approach you need some stories, which can either come from the backlog, or they can be hypothetical. And then you estimate them just as Scrum has taught us to, so we give them points: 1, 2, 3, 5, 8, 13, 20.

And then, the assumption behind the approach is that technical debt increases that estimate, because technical debt makes our code harder to change, and so what we get to do is: in 3 months’ time, we can re-estimate again, and we can see if the estimates have gone up, which means our technical debt has increased.

We can also use it after a refactoring to see if the technical debt has reduced. And interestingly, we can use it after a hypothetical refactoring, so we can say: if we were to make this change to our codebase, would it make that feature that our product manager wants, or might want in the future, easier to do? And if the answer is yes, then clearly you should do it before the feature is implemented. And if the answer is no, if it’s not actually making it easier to add features, then is there much point doing that change when you could spend your time focusing on something else?

There are some interesting corollaries to this approach. The first is that to reduce your technical debt you don’t actually have to make the code any better. So, if you can increase the team’s understanding of the code (maybe there’s an abstraction layer that doesn’t quite work, it’s too leaky or whatever), then that means that the technical debt is reduced. So, this basically captures what you already know: that if you replace the software developers on the team with another set of software developers, they lose their built-in mental model, and so the technical debt goes up. So, if you keep the team stable over time, you can see that technical debt is essentially lowered by having a team that understands the code.

The real problem with this approach is: imagine the estimate used to be 5. We do some refactoring; it’s now 3. But what does that mean in practice? Have we actually reduced the technical debt? So, what are the error bars on this? If it’s 5 plus/minus 2, and it’s 3 plus/minus 2, then has our code actually got any better? So, it basically means that you inherit all of the problems that come with estimating into the technical debt calculation. So you can’t really use it to be quantitative, but you can use it to be qualitative: if I did this refactoring it would reduce the debt, but as to how much, or whether it’d reduce it every single time, it’s quite hard…

Cool, and I’ve over-run, any questions?

Security 101: Just don’t do it

Download the "Security 101: Just don’t do it" slides, or view them online.

Transcript:

Okay, so this talk may actually fit in five minutes :)

So, this is a talk about Security 101: Just don’t actually do it.

So, the background for this is that there was a post by Daniel on Yammer which was basically: we’re writing a piece of code in SQL Server Monitor Hosted, and we need to know how to do something in a secure way. There were a whole 12 replies, and people came up with something, and then I found a blog post where Google had basically solved the same problem, only they solved it a different way, because if you do it the way we concluded, it leaves you open to a misinterpretation attack. Because these are quite complicated to explain, I’m going to pick a different example instead.

So, the point of this talk is to show you that something that seems so trivial and such a good idea actually is not.

So, in this hypothetical world in my example, you’re working for a company that has a couple of products. It’s got a web browser which is used regularly by 45% of people on the internet, and it’s got a web server which is visited by 90% of people on the internet. You should be able to work out which company I’m talking about here; it’s not entirely hard.

Cool, so, the product manager comes to you (you’re one of the developers) and says it has to go faster. We want our web browser and our web server to work awesomely fast together, because the application is people doing internet searches, where it’s really, really important that results show up very quickly. And users have very slow internet connections, especially their upload, so we should take account of that in our design.

And what the product manager thinks is the feature we should implement is we’re going to embrace, extend and extinguish the HTTP/HTTPS standard, by adding our own proprietary extension so that when our web browser talks to our web server we’re going to compress all the HTTP headers, and we’re going to do this even when we run HTTP over HTTPS.

Are we all following so far? Cool, awesome.

So, what are you going to say back to your product manager? So this is a show of hands. How many of you are going to say yep, that’s a brilliant feature, we should definitely go and implement that? And how many people are going to say nope, that’s a terrible idea; it would introduce a security vulnerability into our web browser? Hands for that? You all know what’s coming, don’t you? :( And the third one is the one that…, votes for the third one? The third one is that it depends on what our threat model is.

So then we come onto threat models. So with security you always start with a threat model, before you do absolutely anything else, you start with a threat model.

And the quote’s from Wikipedia, so it says: Attacker-centric threat modelling starts with the attacker, and evaluates what they actually want to achieve and what technical capabilities they have in their bag of tricks to achieve that, because only when you know what the attacker wants to do with your system, do you know where it’s worth spending the time to invest, because you’ve got limited resources, limited development, and you probably should be spending those resources actually defending the bits that the attacker is going to attack, and the bits that the attacker is not going to attack probably doesn’t need as much of your time.

So, now we come onto how the…, so let’s say that we have actually implemented the HTTP header compression; this is what the attacker is going to do.

So the attacker’s goal in this case is to obtain a login cookie, so that they can impersonate a user on a target site. And the capabilities that the attacker has are observing the network traffic, e.g. on a public Wi-Fi network, Starbucks, whatever, and getting you to visit their evil site, which will run some JavaScript. And what the JavaScript is going to do is add images into the DOM, and crucially, when it adds an image into the DOM, because the attacker can observe the network traffic, he can see the length of the request that’s sent to the web server. And then a second later, when he adds the next image into the DOM, he gets to see the length of that request too. And he can do all of that because he can observe the network traffic.

So, this is just a bit of background about HTTP headers for those of you that don’t know what they are. They look a bit like this: basically, the bit in green is constant across every single request. The bit in red is the bit that the web browser must keep secure: this is the authentication cookie, so if the attacker gets the bit in red he wins. And the bit in blue at the top is the actual page we’re requesting from the web server.

So, now this is how the attacker attacks the compression of HTTP headers. The attacker can change the DOM on his site, so he just inserts a whole bunch of images into the DOM; images that come from the target site he’s attacking. So the first time round he inserts an image with the URL DeploymentManagerAuthenticationTicket=0 (it’s going to 404, but that’s not really important). The next time round he inserts an image with that one, and then the next time round he inserts an image with that one.

And the point is that now that we’re compressing the HTTP headers (these are all HTTP headers), the top one is going to compress better than the bottom two, because it’s got a longer repeated string. Which means that, because the attacker’s on the public Wi-Fi network just as you are, he gets to see the length of your request, he gets to work out that the first character of the cookie is 0, and he gets to repeat the same thing to get the next character. And the number of requests he has to make is only the number of characters in the cookie times the number of possibilities for each character, which is fairly small.
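To make that concrete, here is a small sketch (not code from the talk: the header values, cookie, and class name are all made up) that compresses the same request with a few different guesses injected into the URL. The guess that matches the real cookie shares a longer repeated string with the Cookie header, so it tends to compress slightly smaller; a real attack has to repeat and average to deal with noise, but the total is still only around (cookie length) × (characters to try per position), e.g. 32 × 16 = 512 requests for a 32-character hex cookie.

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class CompressionOracleDemo
{
    // Hypothetical request headers: the Cookie line holds the secret the
    // attacker wants; {0} is the guess he injects via the image URL.
    const string RequestTemplate =
        "GET /?DeploymentManagerAuthenticationTicket={0} HTTP/1.1\r\n" +
        "Host: target.example.com\r\n" +
        "User-Agent: HypotheticalBrowser/1.0\r\n" +
        "Cookie: DeploymentManagerAuthenticationTicket=7f3a9c\r\n" +
        "\r\n";

    // Compress the headers with DEFLATE and return the compressed size,
    // which is what the attacker can observe on the wire.
    static int CompressedLength(string text)
    {
        byte[] raw = Encoding.ASCII.GetBytes(text);
        using (var buffer = new MemoryStream())
        {
            using (var deflate = new DeflateStream(buffer, CompressionMode.Compress, true))
            {
                deflate.Write(raw, 0, raw.Length);
            }
            return (int)buffer.Length;
        }
    }

    static void Main()
    {
        // Try a few guesses for the first character of the cookie. The guess
        // that matches the real value ('7' here) extends the repeated string
        // shared with the Cookie header, so it usually compresses a little
        // smaller than the wrong guesses.
        foreach (char guess in "0178f")
        {
            string request = string.Format(RequestTemplate, guess);
            Console.WriteLine("guess '{0}': {1} compressed bytes",
                              guess, CompressedLength(request));
        }
    }
}
```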

So, this is why you can’t implement the feature your product manager says.

And this is the last slide. So the takeaway from the talk is that a feature that seems so simple will have a security vulnerability in it that you can’t reason about, so basically just don’t write this kind of code. Use an existing library if you can. If OpenSSL doesn’t have a function to compress the HTTP headers, then you shouldn’t build one on top of it, because there’s a reason it’s not in the underlying library. And if you can’t use an existing library then you’ve got a big problem, which is beyond the scope of this 5 minute talk :)

Cool, and that’s it. Any questions?