Explore Your Data With SubdataSheets

Having worked with Access since the days of Access 2, I am always interested in changes that will improve life for my clients. Each new version has had many new features suited to the developer, or that improved the inner workings of the software, but improvements for the average Access user have been very thin on the ground. The major exception to this trend was Access 95, which introduced Filter By Selection and Filter By Form and also added filter capabilities to tables and queries.

On the surface, Access 2000 also seemed to have little for end users, with one exception: a little cross that appeared on the left-hand side of some tables when you opened them. This little cross seemed really neat because, when you clicked on it, it showed detail from a related table. But I soon went back to testing my converted Access 97 databases and basically forgot all about it. Late in 1999, I developed a new Access 2000 database for a client and in the process discovered ways to make the cross useful for the user. This article explains how to use this cross (which is the entry point to a subdatasheet) to help the users of your software explore their data. Interestingly, I could not find any information on the feature, except that I worked out it would only appear in the primary table of a one-to-many relationship. An example of drilling down using the cross is shown in Figure 1.

Posted in Design and Tables

Everything About AutoNumbers

AutoNumber and identity fields have some benefits—and some costs. In this article, Russell Sinclair covers just about everything you can do to manage AutoNumbers in ADO, ADOX, and DAO. Since most AutoNumber fields are used as primary keys, this is the information you need to manage your tables.

Before reading this article, see this post

When designing tables in Access, you often have to make a choice about how to define the primary key for your tables. You can either base your primary key on a field (or fields) that holds real, meaningful data, or you can add an AutoNumber field and base your primary key on that field. AutoNumber fields offer a number of advantages over using real data as a primary key. Relationships based on AutoNumber fields can be easier to create. Indexes based on AutoNumber fields can be updated faster by Access; as a result, inserts and updates to the tables are performed faster. In cases where the uniqueness of a record is defined by multiple columns, AutoNumbers provide an efficient way to identify and search for a particular record.

The term “AutoNumber” is unique to Access. A more generic name for these types of fields is “identity” fields. An identity field can be sequential or random. That is, the value that’s created for a new record can be generated by adding a number to the last new value, or a random number can be generated for each new record. To support sequential identities, the identity field must have two other characteristics: the seed and increment values. The seed value is the next number that will be used for this field when a record is added. The increment value is the number by which this value will be increased between each record.

Access makes it easy to add an identity field to a table. However, there are a few problems with the way you can work with them. The first and most obvious problem is that Access gives you no control over the seed or increment values. Access automatically assigns an initial seed value of 1 and an increment value of 1 to all identity fields. Another problem is that if you find that you need to change an AutoNumber field to a standard Long data type, Access requires that you delete all foreign relationships associated with the field before the change can be made. This means that you must ensure that you delete and rebuild all relationships for the field—an error-prone process. The final problem occurs when you insert a record with an identity into a table using code or an append query: You can’t determine what value will be assigned to the identity field in the new record. Although you could search the table for a record matching the entry you just made, this type of search can be slow. It would be much easier if there were some way to determine what the last identity created was.

Fortunately, Access 2000 and ADO provide the ability to overcome all of these problems.

Handling seed and increment values

One of the features I’ve always loved about SQL Server is that I can change the seed and increment values of an identity field simply by opening a table in design view and changing two properties of the field. In SQL Server, I can also change the seed and increment values using a SQL Data Definition Language (DDL) statement. The syntax for this statement is:
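The Jet 4.0 form of the statement (my reconstruction, with placeholders as described below) is:

```sql
ALTER TABLE tablename ALTER COLUMN fieldname IDENTITY (seed, increment)
```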

In this statement, tablename is the name of the table to which I want to add the new field fieldname, and seed and increment are any number within the range of the data type of the field (SQL Server allows you to use a variety of integer data types for identities). Now, with Access 2000, I can use this SQL statement against an Access table.

There are three catches to using this SQL statement with Access:

  • There’s no way to tell what the current seed and increment values are, either through DAO or ADO.
  • The SQL statement will return an error if the field is the primary field in any foreign relationships—Access prevents an ALTER COLUMN statement from being run on any field that’s the primary field in an enforced relationship.
  • The statement will only work if it’s run through ADO. It won’t work if it’s run in the Access IDE or through DAO. To execute this properly, you’ll need to use code like this:
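A minimal sketch of that ADO call, with hypothetical table and field names, would look like this:

```vba
' Run the DDL through ADO against the current database.
' tblCustomer and CustomerID are placeholder names.
Dim cnn As ADODB.Connection
Set cnn = CurrentProject.Connection
cnn.Execute "ALTER TABLE tblCustomer " & _
    "ALTER COLUMN CustomerID IDENTITY (1000, 1)"
```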

The final catch isn’t really a problem. To run the statement in DAO, just replace the word IDENTITY with the word COUNTER, and the statement will execute correctly. In fact, the COUNTER version will run in ADO as well.
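In DAO, that substitution would look something like this (again with placeholder names):

```vba
' DAO: the same DDL with COUNTER in place of IDENTITY
CurrentDb.Execute _
    "ALTER TABLE tblCustomer ALTER COLUMN CustomerID COUNTER (1000, 1)", _
    dbFailOnError
```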

Fortunately for us, Microsoft added ADOX to Microsoft Data Access Components in version 2.5. ADOX stands for the ActiveX Data Objects Extensions for Data Definition Language and Security. This is a library that can be used to create, modify, manipulate, and delete database objects like tables and queries. The advantage to using ADOX is that it’s supposed to be database-neutral—the same code should work with all databases that have implemented an OLE DB provider. In ADOX, a Catalog object represents each database or data source. Each Catalog object can contain Tables, Views, Procedures, Groups, and Users. These objects, in turn, contain some collections of their own and expose some properties that might not be available through the Access IDE or DAO.

When I first started toying with ADOX against Access databases, I noticed that there were a number of interesting properties that were exposed for each of these objects in their respective Properties collections. The ADOX Column objects that represent fields in a Table have three properties that interested me in particular: Autoincrement, Seed, and Increment. The Autoincrement property is a Boolean value that specifies whether or not a field is an identity field. This property is read-only, but, if it’s set to True, I know that I’m dealing with an identity field. The Seed and Increment properties expose the identity field’s properties. These properties are read/write, so you can change them simply by modifying the values with ADOX code. You can see the code that does this in my ChangeIdentity function in the basIdentity module that’s available in the Source Code file at www.smartaccessnewsletter.com. My ChangeIdentity function takes a table name, a field name, a seed, and an increment value as parameters and resets the values of the specified field in the table.
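The download isn’t reproduced here, but a sketch of what such a function presumably does with those three ADOX properties is:

```vba
' Sketch only: reset an identity field's Seed and Increment via ADOX.
' The real ChangeIdentity function is in the article's download.
Public Sub ChangeIdentity(strTable As String, strField As String, _
                          lngSeed As Long, lngIncrement As Long)
    Dim cat As New ADOX.Catalog
    Dim col As ADOX.Column

    cat.ActiveConnection = CurrentProject.Connection
    Set col = cat.Tables(strTable).Columns(strField)

    ' Autoincrement is read-only: use it to confirm this is an identity field
    If col.Properties("Autoincrement") Then
        col.Properties("Seed") = lngSeed
        col.Properties("Increment") = lngIncrement
    End If

    Set cat = Nothing
End Sub
```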

Changing the data type

The second challenge that I’ve faced when working with identity fields is changing them to standard Long fields. This type of situation doesn’t come up all the time, but it’s something that I’ve had to do often enough that I wanted to automate the process. The main reason that this change is difficult in Access is that the field can’t be part of the primary key in any enforced relationships. What I needed to do in order to accomplish this task was to save the relationship information somehow, delete the relationships, change the field to a Long, and then re-create the relationships from the saved information.

Actually changing the field data type is quite easy and can be done through ADO or DAO by using a SQL DDL statement:
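The statement, with tablename and fieldname as placeholders, presumably read:

```sql
ALTER TABLE tablename ALTER COLUMN fieldname INTEGER
```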

This command tells Access to change the field specified in the tablename and fieldname parameters to a 32-bit integer data type (INTEGER), which is the SQL DDL equivalent to an Access Long data type.

Saving the relationships

Once I’d determined what fields I wanted to change the data types for, I needed some way to save the information about the relationships defined against the field. Although I could have done this using DAO, I felt that it was best to continue to use only a single data-access library. I decided that I’d use the information provided by ADOX to store the relationship information for the fields I wanted to modify.

This information isn’t as easy to find in ADOX as it is in DAO. In DAO, you can use the Relations collection in a Database object to analyze the relationships in an Access database. In ADOX, relationships are stored in a Table’s Keys collection. However, the information about a relationship is stored in the table with the foreign key, not in the table whose primary key you want to change. This means that I needed to know what tables were related to the field being changed before I could get any information about the relationship itself. Scanning all of the Keys in all of the tables in the database in order to obtain this information for each AutoNumber field didn’t really seem like a very efficient way to retrieve this data. Fortunately, I didn’t have to do this.

ADO connections have an OpenSchema method that’s used to return a recordset containing information about the design—or schema—of the database. This method takes three parameters: the schema ID to be returned (a value defined in the SchemaEnum data type), an array of filter criteria for that schema, and a provider-specific GUID that’s only used if the schema ID is adSchemaProviderSpecific (meaning that you’re asking for a provider-specific schema). One of the values that the schema ID argument takes is adSchemaForeignKeys. This returns a recordset of all of the primary/foreign key relationships in the database and their main properties. I used the OpenSchema method to return all of the relationships for the table that was about to change. I then filtered this recordset locally so that it would only list those relationships defined against the primary key field I needed to change. While I could have used the criteria argument to filter the results before returning them, I found that local filtering was more reliable.
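A sketch of the OpenSchema call described above (the field names in the returned recordset are the standard ADO foreign-key schema columns; the table name is a placeholder):

```vba
' Return all foreign-key relationships, then filter locally
' to those whose primary side is the table being changed.
Dim cnn As ADODB.Connection
Dim rst As ADODB.Recordset

Set cnn = CurrentProject.Connection
Set rst = cnn.OpenSchema(adSchemaForeignKeys)

rst.Filter = "PK_TABLE_NAME = 'tblCustomer'"
Do Until rst.EOF
    Debug.Print rst!FK_NAME, rst!FK_TABLE_NAME, rst!FK_COLUMN_NAME
    rst.MoveNext
Loop
rst.Close
```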

In this schema recordset, each row includes the key name and foreign table name. This information allowed me to access the key information for the related fields so that I could store the relationships in working tables. However, there’s one problem. Key objects not only define relationships in a database, but also define unique indexes in a table. The names of these objects can, and often will, overlap. Any table in Access that’s the foreign table in a one-to-one relationship with another table will have a relationship and a unique index with the same name. This isn’t a problem in Access, but if you’re accessing these items through named keys in ADOX, there are actually two items in the Keys collection with the same name. You can never be sure which one you’ll actually get when you try to refer to a Key by name.

Fortunately, each Key object has a Type property. The Type property contains a value of type KeyTypeEnum. If the key is a foreign key, the value of the Type property is adKeyForeign. By checking this property, I could ensure that I was actually dealing with the key I needed to log. Otherwise, I needed to loop through each Key in the Table object, checking the Name and Type properties to determine whether it was the Key I needed to log, and then save the Key information. You can see the code that does this in SaveKeyInfo in basIdentity in the sample database.
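That loop might look something like this (a sketch; SaveKeyInfo in the download is the real implementation, and LogKey here is a hypothetical logging routine):

```vba
' Find the foreign key with the given name, skipping any unique
' index that happens to share that name.
Dim ky As ADOX.Key
For Each ky In cat.Tables(strForeignTable).Keys
    If ky.Type = adKeyForeign And ky.Name = strKeyName Then
        ' This is the relationship, not the unique index: log it
        LogKey ky
    End If
Next ky
```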

With each key that I needed to save, there were two pieces of information that needed to be recorded. The first piece that needed to be documented was the key information itself, including its Type, Name, PrimaryTable, PrimaryColumn, ForeignTable, UpdateRule, and DeleteRule. The second piece of information that needed to be documented was the fields that were referenced in the relationship. This is especially helpful in relationships that are defined on more than one field. Although it’s extremely uncommon to do this with AutoNumber fields (there’s usually only one field in the relationship), the possibility still needs to be accounted for. In order to document the field information, I needed to log the characteristics of each Column object in the Key’s Columns collection.

When dealing with Column objects defined against a Key, there are five pieces of information that are pertinent. The ForeignTable, ColumnName, and RelatedColumn properties help to define the relationship between the tables. However, in order to be able to re-create the relationship, I also needed to preserve the order in which the Columns were set in the relationship. Because ADOX doesn’t provide this as a property of the Column object, I needed to create a simple custom counter that was incremented with each Column and used to order the data when it would be retrieved later on. The final piece of information I needed to save for the Key’s Columns was the name of the relationship. This was done simply to ensure that I could relate the column information back to the key information later on.

All of this information is stored in tables called tblKey (for the Key objects) and tblKeyColumns (for the Key’s Column objects). You can see the results in the SaveKeyInfo function in the sample database.

Deleting the relationships

Now that I had the information I needed to document the state of the Keys before I made any changes, I was ready to delete the relationships that were preventing the data type of my AutoNumber field from being modified. In order to do this, all I had to do was loop through the saved Key information and delete each Key using the saved name in the table. This can be done by calling the Delete method of the Keys collection in a Table object, passing in the name of the Key to delete, like this:
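The call would look something like this (variable names are mine):

```vba
' Delete the saved relationship by name from the foreign table
cat.Tables(strForeignTable).Keys.Delete strKeyName
```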

You might be wondering why the delete code doesn’t need to loop through the keys to ensure that it’s deleting the right key, as I had to do in logging one-to-one relationships. The reason behind this is that Access defines the relationship in such a way that the unique index and the relationship are completely dependent on each other and actually refer to a single DAO object. Although they’re interpreted as separate objects by ADOX, deleting either one of the keys in the table will delete both the index and the relationship in the Access database.

I could now run the ALTER TABLE command to change the identity fields to standard Long fields. Once this was done, I needed to rebuild the relationships from the saved data.

Rebuilding the relationships

Rebuilding the relationships was probably one of the easiest things to do. All I needed to do was use the data I’d saved about the relationships to rebuild the key and column information originally stored in the database. You can see the code to do this in the RebuildKeyInfo procedure in basIdentity. This code loops through each record I’ve saved to tblKey and then uses the information stored in tblKeyColumns to append each of the original fields to the Columns collection in the Key. The newly created Key is then appended to the Keys collection of the table using the Append method of the Keys collection.
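A sketch of that rebuilding step, using assumed variable names for the values read back from the working tables:

```vba
' Re-create one relationship from the saved key and column data
Dim ky As New ADOX.Key
ky.Name = strKeyName
ky.Type = adKeyForeign
ky.RelatedTable = strPrimaryTable
ky.UpdateRule = lngUpdateRule
ky.DeleteRule = lngDeleteRule

' Append each saved column in its original order
ky.Columns.Append strColumnName
ky.Columns(strColumnName).RelatedColumn = strRelatedColumn

cat.Tables(strForeignTable).Keys.Append ky
```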

Retrieving the last identity

The final challenge I wanted to overcome with identity fields was to be able to determine through code the last identity value added to a table. Knowing the value of the last record that you added is essential when that value must be put in the foreign key field of the next record that you’ll add. You could, of course, retrieve the record that you just added to find the value, but there are occasions when even this workaround isn’t possible. The most common are when I have a DAO recordset open in Append Only mode, or when I insert a record into a table by calling an append query from my code.

Access 2000 has added support for a SQL statement that was previously only available in SQL Server. The statement is:
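That statement is the familiar:

```sql
SELECT @@IDENTITY
```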

This statement will retrieve the last identity value added to any table using the current connection. This means that it will only work against the same DAO Database object or ADO Connection object that was used to insert the record in the first place. It always returns 0 in a standard Access query run through the IDE, so you should only use it from code.

For example, if I added a new record to tblEmployee and I wanted to retrieve the new EmployeeID identity value created, I’d have to use code like this to do it:
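A sketch of that code, using the variable name the article mentions:

```vba
' Retrieve the identity assigned by the last INSERT on this connection
Dim rst As ADODB.Recordset
Dim lngNewID As Long

' ...an INSERT into tblEmployee has just run on CurrentProject.Connection...
Set rst = CurrentProject.Connection.Execute("SELECT @@IDENTITY")
lngNewID = rst.Fields(0).Value
rst.Close
```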

The value of the new ID would now be stored in the variable lngNewID. This same SQL statement could be used from DAO, and it could be used after calling an append query that added a record to a table that contained identity values.

Putting it to use

If you take a look at the sample database in the Source Code file, you might notice that it’s actually an Add-In database. You can install this Add-In using the Add-In Manager in Access, which gives you a menu item in the Tools | Add-Ins menu called Modify Identities. The Add-In allows you to quickly change the seed and increment values of all of your identity fields, and even switch the identities to Long fields. There’s even a bonus function called FillIdentities. This function uses ADOX to scan all Tables and Columns to find any Column objects with the Autoincrement property set to True, effectively building a list of all of the AutoNumber fields in your database. The details of each identity field are presented to you so that you can modify them.

If you’re still wondering why you might want this much control over your AutoNumber fields, consider a bug that exists in Jet 4.0 prior to service pack 5. If, when you inserted a record in a table, you provided a value for an identity column that was less than the current seed in the table, the seed would be reset to one more than the record you inserted. So, if you had a table with identities from 1 to 10, excluding 5, and then you inserted a record with an ID of 5, the table would attempt to start numbering the records at 6. This could violate your primary key, or even corrupt your data. This utility allows you to see and fix the problem before any damage is done.

If you’re using AutoNumber fields in your database, odds are you’re using them as primary keys. The foundation of relational database theory begins with assigning primary keys to tables in order to uniquely identify records. Managing those AutoNumber fields is critical to maintaining your data integrity. The tools that I’ve described in this article and supplied in my Add-In will give you all of the control you should ever need.

Your download file is called 102sinclair.ZIP

Garry’s 2009 Autonumber Story

I was asked to debug a database where a new record kept “seemingly” overwriting an existing record in a complex, DAO-centric application written back in the 1990s. So in I went, form after form, and then into popup forms, and still I couldn’t trace why the numbers were being overwritten. Then came my first clue: the table had an AutoNumber field, and the field was not a primary or even a unique key. But even after fixing this, the problem kept occurring. I had assumed the old VBA software was generating the unusual records, because an AutoNumber never overwrites one of its own existing values (does it?). Well, in the end I found out that it does; I’m still not sure why.

So I set about finding code to reset the AutoNumber.

Open the back end and make sure that the AutoNumber field has a unique index. In a new module, add a reference to Microsoft ADO Ext. for DDL and Security, and then add the following:
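A sketch of the kind of routine the post describes (the names are mine):

```vba
' Reset the next AutoNumber value for a field by changing its seed.
' Requires a reference to Microsoft ADO Ext. for DDL and Security (ADOX).
Public Sub ResetAutoNumber(strTable As String, strField As String, _
                           lngSeed As Long)
    Dim cat As New ADOX.Catalog
    cat.ActiveConnection = CurrentProject.Connection
    cat.Tables(strTable).Columns(strField).Properties("Seed") = lngSeed
    Set cat = Nothing
End Sub
```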

Call it using the following (I did this in the Immediate window):
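For a hypothetical reset routine, table, and seed, the call might look like:

```vba
' Immediate window (names are placeholders):
ResetAutoNumber "tblInvoice", "InvoiceID", 5001
```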

You may then want to remove the module and the ADOX reference.

See Switching Primary Indexes and Field Index Gotchas

Posted in Design and Tables

Decomposing a Badly Designed Table

Many users have decided that they don’t need the full power of an RDBMS and that Excel is the right choice for storing data. It’s not a bad choice, until the application grows to the point that they do need a “real database.” Frank Kegley shows you how to handle the typical problems of converting from Excel to Access.

Recently, a client told me that she was tired of using Excel’s “database” capabilities. She asked me to take a look at her list with a view to converting her Excel spreadsheet to one or more tables in an Access database. As you can see in Figure 1, there were duplicate values in many of the columns. There were also typographical errors and inconsistencies, such as different OrderDate values for the same OrderID value. Ensuring the correctness of her data was taking up more and more of her time. Reporting had also become more of a chore than she liked.

Posted in Design and Tables

We Get Letters

This month, Doug Steele passes on feedback he’s received from readers about past columns. Along the way, he shows how different priorities cause different programmers to come up with different solutions for the same problem.

1) See the Embedded quotes in SQL statements question

The “Handling groups of controls” May column dealt with ways to treat controls as a group, usually for the purposes of making them visible or not…

Stephen Charles wrote with an interesting twist on using the Tag property of each control to allow multiple grouping. He assigns each group a binary value (1, 2, 4, 8, 16, 32, and so on), sets the tags equal to the sum of the binary value(s) of the group(s) to which it belongs, and then uses AND to do a bitwise comparison to determine whether or not each control is in the specific group of interest. For example, he might have three groups that he’s going to use on a particular form, so he’d think of the groups as 1, 2, and 4. It’s fairly straightforward to see that controls that should only be visible as part of the first group would have a Tag value of 1, those that should only be visible as part of the second group would have a Tag value of 2, and those that should only be visible as part of the third group would have a Tag value of 4. However, if a particular control should be visible as part of both the first and second groups, its Tag value would be 3; if it should be visible as part of both the first and third groups, its Tag value would be 5. This works for all of the possible combinations.

In a specific example he sent, he used the same form to capture input for a number of different reports. He had a combo box that listed the various reports of interest, plus the combo box had a number associated with each report (in a hidden column in the combo box) that he could use to indicate which controls he wanted visible for each report. The code associated with the combo box’s AfterUpdate event was something like this:
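A sketch of that AfterUpdate code, with assumed control and column names:

```vba
' cboReport's hidden second column holds the group number (1, 2, 4, ...)
Private Sub cboReport_AfterUpdate()
    Dim ctl As Control
    Dim lngGroup As Long

    lngGroup = CLng(Me.cboReport.Column(1))
    For Each ctl In Me.Controls
        ' Only touch controls that carry a numeric group Tag
        If IsNumeric(ctl.Tag) Then
            ctl.Visible = ((CLng(ctl.Tag) And lngGroup) <> 0)
        End If
    Next ctl
End Sub
```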

Stephen even sent an example of how this works, which I’ve included in the downloadable database associated with this month’s column.

This strikes me as a fairly straightforward approach. In fact, I think it’s a lot simpler than the approach I used of having the relationships between the controls and the groups to which they belonged stored in a table. Thanks for the suggestion, and for the sample, Stephen.

Jay Selman wrote to describe how he makes groups of controls disappear. He places all of the controls that he may want to disappear on a tab page control. When he sets the tab page control’s visibility to False, all of the controls on the tab page disappear with the tab. This works for his situation because the controls are normally grouped together on the form anyway. In addition, if he doesn’t want a tab control to appear when the controls appear, he just sets the Tab Style to None and the Back Style to Transparent so that the tab control itself is hidden.

The “More Dragging Around and Cuing Banners” February column addressed how to simulate cue prompting…

Chris Weber, whose articles you’ve doubtlessly read in past issues of Smart Access, was disappointed that it was “only applicable to unbound text boxes, rich text boxes, and combo boxes in Access,” and also thought that the code-heavy implementation didn’t really explore the object properties available to Access developers. So he took it upon himself to try a simpler approach.

He felt there had to be an easier way and, to be truly useful, a way to have cues within bound controls. His first thought was that the cues could be implemented as labels placed beneath transparent controls. When the control got focus, it would, by default, appear non-transparent. When the user left the control, if the control was Null, its Back Style should remain Transparent, allowing the cue to show through. If not, its Back Style should be set to Normal, obscuring the label. To try this out, he decided to work with the Customers form in the Northwind database.

The first step was to change the form to Standard style using the Format | Autoformat menu selection. He then highlighted all of the labels and set the Back Color property in the property sheet to white (16777215), the Border Style to Transparent, and the Fore Color to a dark grey (10263706). Next, he held down the Shift key, lassoed the bound controls, and set their Back Style to Transparent. Finally, to set the cues in each label, he did the same as I had in my example: He changed each to “Enter the fieldname” and dropped the colon from each. After these changes, the form looked like the one in Figure 1.

Figure 1

The next step was to align the labels behind their corresponding controls. To get a perfect fit, Chris first used the Format | Size | To Widest menu selection on each pair (cursing all the while that Access doesn’t have a Ctrl+Y, Repeat Formatting, like Excel or Word!). Then, he used Ctrl+A to select all of the controls and chose Format | Size | To Tallest so that they’d all be the same height. To align each of the labels with its respective control, he used a little-known feature of Access: If a label is nudged behind a control, you can get the label to align perfectly behind the control through the Format | Align menu selection. If the label isn’t already overlapping, the Align menu choice will just slam it as close as possible to the other control. Therefore, pushing each label a bit behind its parent control, selecting each pair, and then selecting Format | Align | Right lines them all up perfectly.

You can see the results in Figure 2. While the right-hand version (which shows what the form will look like for a new record) looks good, some work is still required when the form is opened to an existing record (the left-hand version).

Figure 2

It was at this point that a fundamental difference in philosophy between Chris and me became evident. As you’ve probably gathered from reading my columns, I’m a code jockey. I prefer using code to accomplish virtually everything. I feel it lets me know exactly what’s going on. I also feel it makes the application easier to understand for others who have to support it: They can see that there’s code causing whatever is happening on the form, rather than having to look for specific properties that have been set. It also provides me with a space to write comments. Chris, on the other hand, feels that writing code should be a last resort. Consequently, his first approach to solving this problem was to try and use Conditional Formatting.

Note: In Access 2007/2010, Conditional Formatting is only found in Layout view.

He began by resetting the Control Source of the Company Name field and then choosing Conditional Formatting under the Format menu. He wanted the Back Color of the Company Name control to be white and non-transparent if the control has data. Figure 3 shows what he attempted.

Figure 3

He then used the Format Painter on the toolbar to transfer this condition to all of the other controls and updated the name of the field in brackets for each control. However, no matter how he tried, he couldn’t get the expression to evaluate properly: The controls always appeared blank (non-transparent with a white background). In fact, even after deleting the conditions, the controls still appeared blank. It turns out that setting the condition for a change of Back Color automagically changes each control’s Back Style from Transparent to Normal.

Fortunately, the code required to make the Back Style of a control Normal when it has data and Transparent when it doesn’t is pretty straightforward: essentially a single line of code! If you look in the Help file for details about the Back Style property, you’ll see that its values are either Transparent (0) or Normal (1). All that’s required is to set the Back Style property to the appropriate value, depending on whether the control is Null or not. The code that does this checking must be in the form’s Current event in order to set the properties properly as each new row in the recordset is read, as well as in the AfterUpdate event of each control.

By setting the Tag property of each of the controls for which you want this effect to the same value (Chris used “CueControl”), you can write a generic function to be used for the form’s Current event. Just set the form’s Current event to call Chris’ function.
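A guess at the shape of Chris’ function (his original ships in the download); note how it leans on True being -1:

```vba
' Transparent = 0, Normal = 1. IsNull returns True (-1) for an empty
' control, so IsNull(...) + 1 yields 0 (Transparent) or 1 (Normal).
Public Function CueControl_Current(frm As Form)
    Dim ctl As Control
    For Each ctl In frm.Controls
        If ctl.Tag = "CueControl" Then
            ctl.BackStyle = IsNull(ctl.Value) + 1
        End If
    Next ctl
End Function
```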

While that works, my preference is to not depend on True being -1. So I’d rewrite the function like this:
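A sketch of that rewrite, making the mapping explicit instead of relying on True being -1:

```vba
Public Function CueControl_Current(frm As Form)
    Dim ctl As Control
    For Each ctl In frm.Controls
        If ctl.Tag = "CueControl" Then
            If IsNull(ctl.Value) Then
                ctl.BackStyle = 0   ' Transparent: let the cue show
            Else
                ctl.BackStyle = 1   ' Normal: hide the cue label
            End If
        End If
    Next ctl
End Function
```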

Similarly, you can write a generic function to use in each control’s AfterUpdate event (in other words, select all of the data-aware controls and assign =CueControl_AfterUpdate() to their AfterUpdate events). Chris’ function is:
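A sketch of such a handler, using Screen.ActiveControl to work on whichever control was just edited:

```vba
Public Function CueControl_AfterUpdate()
    With Screen.ActiveControl
        If IsNull(.Value) Then
            .BackStyle = 0   ' Transparent
        Else
            .BackStyle = 1   ' Normal
        End If
    End With
End Function
```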

Figure 4

Notice how much smaller the form is compared to the original. For some forms, you can omit the labels and save a lot of space. With screen space at a premium, fewer interfaces can contain more data, which can simplify navigation in your application.

Good work, Chris! It’s a big improvement.

Your download file is called Steele_WeGetLetters.accdb
Posted in Access Controls

Validate input such as ZIP codes and postal codes

In Access Answers, Doug Steele addresses commonly asked questions from Access developers. This month, he looks at using Regular Expressions in Access, as well as problems dealing with time values.

Is there a simple way to validate input such as ZIP codes and postal codes?

While it’s possible to write a function that uses string functions such as Left, Mid, and InStr to validate your input, it may be more appropriate to take advantage of the Regular Expression capability available through the object library created, originally, for VBScript. Postal codes are generally great candidates for Regular Expressions. For instance, US ZIP codes are either “NNNNN” or “NNNNN-NNNN”, while Canadian postal codes have the form “ANA NAN” (where A is an alphabetic character and N is a numeric character).
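As a sketch of the approach (the pattern and function name are mine), a US ZIP code validator using the VBScript RegExp object looks like this:

```vba
' Validate "NNNNN" or "NNNNN-NNNN" using a late-bound VBScript RegExp
Public Function IsValidZip(strZip As String) As Boolean
    Dim re As Object
    Set re = CreateObject("VBScript.RegExp")
    re.Pattern = "^\d{5}(-\d{4})?$"
    IsValidZip = re.Test(strZip)
End Function
```

A Canadian postal code would use a pattern such as "^[A-Za-z]\d[A-Za-z] \d[A-Za-z]\d$" instead.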

Posted in Access Controls