Each of the following is an argument which might be used to support the use of relations which are not fully normalised. Select the weakest argument.

Feedback on the candidate arguments:
- A normalised database should run faster, not slower, as far as insertions are concerned (there is less information to add per row).
- Full normalisation may require more join conditions than an unnormalised design, but this is traded off against a drop in data per record (less duplication).
- The legacy argument is certainly true: legacy systems may contain duplicated information, but it would be too expensive to do anything about it now.
- How many tables are too many?
- Queries may indeed become more complicated, as there may be more join conditions. However, you do not need to cross-check duplicated information in a normalised database, so perhaps it is a case of "swings and roundabouts".
- Table size is reduced by normalisation, not increased. But how large is too large?
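The trade-off the feedback describes (an extra join condition at query time, in exchange for less data per record and no duplicated facts to keep in sync) can be sketched with a small in-memory example. The table and column names here are hypothetical, invented purely for illustration; they do not come from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Unnormalised (hypothetical): the customer's name is repeated on every
# order row, so each insert carries duplicated information.
cur.execute("CREATE TABLE orders_flat (order_id INTEGER, customer_name TEXT, item TEXT)")
cur.executemany("INSERT INTO orders_flat VALUES (?, ?, ?)",
                [(1, "Ada", "widget"), (2, "Ada", "gadget")])

# Normalised (hypothetical): each fact is stored once; a join condition
# reassembles the same view at query time.
cur.execute("CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, item TEXT)")
cur.execute("INSERT INTO customers VALUES (1, 'Ada')")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 1, "widget"), (2, 1, "gadget")])

# The normalised query needs a join, but inserts are smaller and the
# customer's name cannot drift out of sync between rows.
rows = cur.execute(
    "SELECT o.order_id, c.name, o.item "
    "FROM orders o JOIN customers c ON o.customer_id = c.customer_id "
    "ORDER BY o.order_id").fetchall()
print(rows)  # → [(1, 'Ada', 'widget'), (2, 'Ada', 'gadget')]
```

The join reproduces exactly what the flat table stores, which is the "swings and roundabouts" point: the unnormalised form saves a join condition but pays for it in duplicated data on every insert.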