Tales from the Evil Empire

Bertrand Le Roy's blog

  • When (not) to override Equals?

    In .NET, you can overload operators as well as override the default implementation of the Equals method. While this looks like a nice feature (and it is, if you know what you're doing), you should be very careful because it can have unexpected repercussions. First, read this. Then this.
    One unexpected effect of overriding Equals is that if you do it, you should also override GetHashCode, if only because the Hashtable implementation relies on the two being in sync for objects used as keys.
    Your implementation should respect three rules:

    1. Two objects for which Equals returns true should have the same hash code.
    2. The hash code distribution for instances of a class should be random.
    3. If you get a hash code for an object and then modify the object's properties, the hash code should remain the same (just like the song).
    While the first requirement ensures consistency if your class instances are used as the key in a hashtable, the second ensures good performance of the hashtable.
    The third requirement has an annoying consequence: the properties that you use to compute the hash must be immutable (i.e., they must be set from the constructor only and be impossible to change at any time after that).
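    For instance, here's a minimal sketch of a correct override on an immutable type (the class and its members are hypothetical):

    using System;

    // Both fields are readonly, so the hash can safely be computed from
    // them: it will never change during the object's lifetime (rule 3).
    public sealed class ImmutablePoint {
        private readonly int x;
        private readonly int y;

        public ImmutablePoint(int x, int y) {
            this.x = x;
            this.y = y;
        }

        public override bool Equals(object obj) {
            ImmutablePoint other = obj as ImmutablePoint;
            return other != null && other.x == x && other.y == y;
        }

        public override int GetHashCode() {
            // Equal objects get equal hashes (rule 1); combining with a
            // multiplier spreads the values around (rule 2).
            return x * 31 + y;
        }
    }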
    So what should you do if your Equals implementation involves mutable properties? Well, you could exclude those from the computation of the hash and take only the immutable ones into account, but by doing so, you're breaking requirement 2.
    The answer is that you should actually never override Equals on a mutable type. You should instead create a ContentsEquals (or whatever name you choose) method to compare the instances, and let Equals do its default reference comparison. Don't touch GetHashCode in this case.
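    A minimal sketch of that pattern, with hypothetical names:

    using System;

    // A mutable type: Equals and GetHashCode keep their default
    // reference semantics, and value comparison goes through a
    // separate method.
    public class Node {
        private string text; // mutable, so it must stay out of any hash

        public string Text {
            get { return text; }
            set { text = value; }
        }

        // Compares contents without touching Equals or GetHashCode.
        public bool ContentsEquals(Node other) {
            return other != null && other.text == text;
        }
    }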
     
    Update: It may seem reasonable to say that it's OK to override Equals and GetHashCode on mutable objects if you document clearly that once an object has been used as a key in a hashtable, it must not be changed, and that if it is, unpredictable things can happen. The problem with that, though, is that it's not easily discoverable (documentation only). Thus, it is better to avoid overriding them altogether on mutable objects.

  • Black hole evaporation paradox?

    I just sent this letter to Scientific American. I'd be interested to have any informed opinion on the matter.
     
    I’ve read the article about black hole computers with great interest, but there are still a few questions that I think remain unanswered.
     
    The article makes it quite clear how black holes could be memory devices with unique properties, but I didn’t quite understand what kind of logical operations they could perform on the data.
     
    But another, more fundamental question has been bugging me ever since I read the article. From what I remember learning about black holes, if you are an observer outside the black hole, you will see objects falling into it in asymptotically slow motion. The light coming from them has to overcome a greater and greater gravitational potential as the object approaches the horizon, losing energy along the way and shifting toward the red end of the spectrum. From our vantage point, it seems like the object does not reach the horizon in a finite time.
    From a frame that moves with the object, though, it takes finite time to cross the horizon.
    This is all very well and consistent so far. Enter black hole evaporation.
    From our external vantage point, a sufficiently small black hole would evaporate over a finite period of time. So how do we reconcile this with the perception that objects never actually enter the horizon?
    It seems like what would really happen is that the horizon would shrink over time, and the incoming particles would never actually enter it.
    If this is true, and no matter ever enters it, would the black hole and the horizon exist at all?
    From the point of view of an incoming object, wouldn’t the horizon seem to recede exponentially fast and disappear before it is reached?
    If nothing ever enters the horizon, is it really a surprise that black hole evaporation conserves the amount of information?
    Does the rate of incoming matter modify the destiny of the black hole? If it grows faster than it evaporates, I suppose the scenario is modified, but how so?
    I know it is quite naïve to think in these terms and that a real answer could only come from actual calculations, but still, I hope that you can resolve what looks like a paradox to me. I don’t see how you can reconcile the perceptions of an external and a free-falling frame of reference if the black hole evaporates, except if nothing ever enters the horizon.
     
    UPDATE: a recent paper presents a similar theory to solve the information paradox:

  • More on non-visual controls and the component tray

    Nikhil gives an excellent explanation of this and why data sources are controls (to summarize really quickly, they must be part of the page lifecycle).
    This also answers an ongoing discussion on TSS.NET about the SqlDataSource, on a subject similar to this old blog entry.

  • All abstractions are leaky. All. But one?

    There's been a lot of talking about leaky abstractions lately. An abstraction is leaky by definition: it is something simple that stands for something more complex (we'll see later on that this is not entirely true in the fascinating world of physics).
    These arguments make sense up to a certain point. And that point is determined by how much time the abstraction gains you. With ASP.NET, the answer is a lot of time, as anyone who's developed web applications with the technology knows.
    So the abstraction may be leaky, but it doesn't matter: the really important thing is that it's useful.
    Joel's point in his paper was really to explain that at some point, you'll need to learn what the abstraction really stands for, because as you use it in more and more specialized and complex cases, it will leak more and more. That's true, and the value of an abstraction can more or less be measured by how long you can work with it without having to worry about the complexity it stands for. Depending on what kind of application you develop, this time can be pretty long with ASP.NET.
    Now, what about physics? Well, in physics, there are leaky abstractions, such as thermodynamics, which nicely reduces the complexity of the chaotic microscopic kinetic energy of molecules to very few variables, like pressure, temperature, or volume. And the abstraction leaks if you start looking at too small a scale, or at a system out of equilibrium. Still, it's one of the most useful abstractions ever devised: it basically enabled the industrial revolution.
    But there are more curious abstractions in physics. If we try to find the ultimate laws of nature, it seems like the closer we look, the simpler the world becomes. In other words, the layers of abstractions that we see in nature seem to become simpler as we go more fundamental. The less abstract a theory, the more leaky it seems, actually.
    Could it be that the universe is the ultimate abstraction, the only one that's not leaky?
    Well, the point is, the universe is no abstraction, it's reality. But if we're lucky and smart enough, we may someday find the only non-leaky abstraction, the one that maps one to one with the universe itself.

  • Why you shouldn't expose public properties from your pages

    We often have users asking us how they can access some variable that's in their page class from their user or custom controls.
    The answer is that your page class can expose public properties, and then any control can cast its Page property to your specific Page-inherited class and gain access to the new properties.
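    For illustration, here is a minimal sketch of that technique (class and property names are hypothetical):

    using System.Web.UI;

    // A page class that exposes a public property.
    public class HomePage : Page {
        public string UserName {
            get { return "someone"; }
        }
    }

    // A control that casts its Page property to reach that property.
    public class NosyControl : Control {
        protected override void Render(HtmlTextWriter writer) {
            // This works, but it couples the control to HomePage: drop it
            // on any other page and the cast throws an InvalidCastException.
            writer.Write(((HomePage)Page).UserName);
        }
    }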
    But the second half of the answer is that you should really not do that even though it's possible.
    There are two reasons for that.
    The first is that it's your page that should orchestrate your controls (by accessing their properties and methods), not your controls that should orchestrate your page.
    And the second, which is closely related, is that your controls should not depend on your page implementing special properties or methods, or containing specific controls. Otherwise, you're breaking one of the most important qualities of WebControls, that is, their reusability. Any control should have the ability to be dropped on any page and just work.
    Your user and custom controls should be components, that is, independent, encapsulated and reusable entities. It's your page (or containing controls) only that should orchestrate the controls and glue them together. The glue should stay outside and should never ooze inside.
    A consequence of that is that your Page generally has no good reason to expose new public properties, because no one should have to consume them.
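    To make the contrast concrete, here's a sketch of the preferred direction, with hypothetical names: the control exposes a property and knows nothing about its host page, and the page pushes the data in.

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    // A reusable control: it exposes a property and has no idea what
    // page is hosting it.
    public class Greeting : WebControl {
        private string userName;

        public string UserName {
            get { return userName; }
            set { userName = value; }
        }

        protected override void Render(HtmlTextWriter writer) {
            writer.Write("Hello, " + userName);
        }
    }

    // The page orchestrates: the glue lives here, not in the control.
    public class HomePage : Page {
        protected Greeting Greeting1; // declared in the .aspx markup

        private void Page_Load(object sender, EventArgs e) {
            Greeting1.UserName = "Bertrand";
        }
    }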

  • How to split server styles

    If you've been developing custom WebControls, you may in some cases have had to split a server-side style across two HTML elements. Usually, we want to apply the border and similar properties to a container like a div or td, and the Font properties and ForeColor to a text element such as a link (because a link forces the color and text-decoration, for example).
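    Here's a hedged sketch of one way to do such a split (the helper name and the exact property split are illustrative):

    using System.Web.UI.WebControls;

    // Copies the box-related properties of a style to a container style
    // and the text-related ones to a text style.
    public class StyleSplitter {
        public static void Split(Style source, Style containerStyle, Style textStyle) {
            // Box properties go to the container (a div or td).
            containerStyle.BackColor = source.BackColor;
            containerStyle.BorderColor = source.BorderColor;
            containerStyle.BorderStyle = source.BorderStyle;
            containerStyle.BorderWidth = source.BorderWidth;
            containerStyle.Height = source.Height;
            containerStyle.Width = source.Width;

            // Text properties go to the text element (a link, for example).
            textStyle.Font.CopyFrom(source.Font);
            textStyle.ForeColor = source.ForeColor;
        }
    }

    Each resulting style can then be applied to its element during rendering, for example with ApplyStyle on a child control or AddAttributesToRender on the writer.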

  • Session sharing between ASP and ASP.NET

    The question comes back every so often, so I thought I'd post about it.
     
    Almost all existing solutions are intrusive and need to modify the code of the ASP application, the ASP.NET application or both. All solutions incur a performance cost as the data has to be marshaled between the COM world of ASP and the .NET world of ASP.NET.
     
    First, there’s a solution in MSDN: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnaspp/html/converttoaspnet.asp but it necessitates modifications on both sides and needs a database, which could degrade the application’s performance.
     
    There are also a few commercial products, all in the $200 to $300 range:
     
    http://www.consonica.com/solutions/statestitch/ which just requires one include on top of each ASP page that uses the session. One drawback is that it doesn’t support COM objects in the session, except for Dictionary.
     
    http://www.sessionbridge.com/ which requires more code changes to all places where the ASP session is used.
     
    And then, there is my own approach, which is the only one I know of that requires no source code change anywhere (except in a few very particular cases). But it’s just a proof of concept, nothing I would use in a production environment (we have no performance data):
    http://www.dotnetguru.org/articles/us/ASP2ASPNET/ASP2ASPNET.html
     
    And a very similar attempt which may scale better:
     
    My own approach is shared source, so anyone is free to improve on it.

  • ViewState restoration does not call any setters


    public bool Selected {
        get {
            object o = ViewState["Selected"];
            if (o == null) {
                return false;
            }
            return (bool)o;
        }
        set {
            ViewState["Selected"] = value;
            // Side effect: keep the owner in sync with the selected node.
            // Note: when ViewState is restored on postback, the values go
            // straight into the StateBag; this setter never runs, so the
            // owner is not notified. Hence the title: don't rely on
            // setters being called during ViewState restoration.
            if (Owner == null) return;
            if (value) {
                Owner.SetSelectedNode(this);
            }
            else if (this == Owner.SelectedNode) {
                Owner.SetSelectedNode(null);
            }
        }
    }

  • A few things I remember about quantum mechanics

    A post on the ASP.NET forums recently went a little crazy, shifting from a perfectly normal question on how to get the response object from a class that doesn't derive from Page or Control (the answer is to use HttpContext.Current.Response) to quantum mechanics and multiverse theories.
    I happen to know a few things about quantum mechanics, dating back to my PhD, so I can shed some light on these subjects (or make them even more obscure, we'll see).
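    As an aside, here's the web part of the answer as a minimal sketch (the helper class is hypothetical):

    using System.Web;

    // Any class running inside a web request can reach the response
    // through the static current context, without a Page or Control.
    public class ResponseHelper {
        public static void Write(string message) {
            HttpContext.Current.Response.Write(message);
        }
    }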
     
    Here are a few things that have been said in this thread and a few comments:
    "Light will act like a wave until observed, at which time it collapses to a point."
    It would be more precise to say a photon, or quantum of light, that is, the minimum quantity of light you can get.
     
    "wherever an elemental "decision" is made (whether the light went through the top or bottom hole of the twin-hole experiment; or whether Schrodinger's cat is alive or dead), the universe splits to accommodate both decisions."
    That was the original idea of the multiverse, but we'll see that there may be a much better and simpler explanation.
     
    "Another "solution" to the riddle proposed by Schrodinger's cat is the idea that light travels backwards in time, just as it travels forward."
    For that to solve any problem, all particles would have to be able to travel back in time: light is not the only way to transmit information. As a matter of fact, virtual particles are able to travel faster than light, but there's no way to observe them directly, so they can't convey any information. As far as we know, no faster-than-light phenomenon can transport any information. Another way to say that is that no signal can travel faster than light. If it travels faster than light, it's not a signal.
    One example uses a pulsar as a beacon: the pulsar sends a jet of particles in some direction, and that direction rotates with the pulsar (like a beacon). Now imagine some enormous projection screen (interstellar gas clouds play this role very well) that emits some light when the particle jet hits it. If the screen is far enough from the pulsar, the spot of light projected on the screen can move well above the speed of light (its speed is the angular rotation speed of the pulsar times the distance from the pulsar to the screen). The explanation is that the spot of light you see at one point in time was not created by the same particles as a little later. In other words, what you see is not an object moving; what you see is a succession of different objects that give the illusion of movement (the real movement is perpendicular to the screen, whereas the one you think you see is parallel to it). A similar phenomenon gives the illusion that a particle can quantum-tunnel through a barrier faster than light. It's a little trickier to explain, but in this case too, no faster-than-light signal can be transmitted.
     
    "I know that the multiverse theory has moved on from that, and rather than splitting universes there are now bubbling multiverses and virtual multiverses"
    True, now it's a completely different theory, based on string theory. It states that there is only one universe (which is the definition of the universe after all) that has different, causally disconnected regions in which the laws of physics are different. These new "bubbles" can appear when a region of an existing bubble tunnels into a state with a lower vacuum energy, which results in the rapid expansion of this bubble as the extra energy is transformed into space, so fast that it disconnects it from the bubble that formed it. There's an excellent article about that in the September issue of Scientific American.
     
    "I mean Schrodinger was trying to explain the role of the observer in deciding the quantum state of a particle. In his experiment he assumed that the only observer was the experimenter that opened the box - until the box was opened the particle was 'in' a state of quantum uncertainty. But, what I always say when someone mentions the experiment - what about the cat???!!! Surely, it knows whether it is alive or dead!"
    Absolutely, this is what makes Schrödinger's cat thought experiment completely bogus as it's usually told: the cat is an observer, and is classical enough to collapse the particle's state. It's never half-dead, half-alive.
    But there are real Schrödinger's cats that actually fulfill exactly the original prediction. The difference is that they are not cats, but rather small lumps of matter. Scientists are now able to make these lumps bigger and bigger, but it will always be impossible to do the experiment with an actual cat.
    What happens when you measure a quantum phenomenon has been fascinating ever since it was discovered, more than any other aspect of quantum physics. The reason is clearly that it is the only case in modern physics where pure chance seems to play a role: it looks non-deterministic. Of course, this has been hastily interpreted by many as the finger of God, or as what enables us to have free will. I'll get back to that as soon as I've exposed a more modern theory of quantum measurement that seems to give very good results while making it all deterministic again. I can't find the references of papers about this, so I'll rely on my memory here. If someone reads this and knows where to find the relevant papers, please drop me a note.
    The idea is that a measurement device is a quantum system (like everything) that has many degrees of freedom, and that a measurement is actually a complex interaction with such an object. What happens is that this interaction causes the quantum object to collapse into a classical state. This theory is able to predict the time it takes the object to collapse, and how complex an object has to be to cause the collapse. Experimental data seems to confirm this theory (I think the experiments were done at the École Normale Supérieure in Paris).
    So according to this theory, there is nothing strange or random in a measurement, it's just one quantum interaction like everything. In a way, the chaos of the state of the device replaces chance. And everything is deterministic again.
    Including the human brain.
    So where does that leave our freedom of choice? Well, we would have none, obviously, if we are made of quantum particles like the rest of the universe. But that's not a problem, the illusion of it is enough.

  • Sorry about the comments on old posts

    For some obscure reasons that have to do with spam but that I failed to understand, comments are not allowed anymore on posts older than 30 days on weblogs.asp.net blogs.
    This is very frustrating and goes against the very principle of blogs. I'm really sorry about that, but there's nothing I can do other than send internal mail to complain about it (which I already did). I just hope that this limitation is removed as soon as possible.
    For now, if you have comments, you can send them to me using the contact feature of the blog, and I'll store them until I can post them for you (yes, amazing as it may seem, even I can't comment on my own blog).