Recently there was a question on one of the FileMaker discussion forums about cleaning up a found set. When a question of this nature arises, it's typically some variation on "How can I remove duplicate entries?" But this was the opposite: for a given found set of customers, how can I omit those whose Zip codes appear only once in the found set? In other words, keep the records whose Zips appear multiple times and banish the others.

Note that the challenge was starting from a subset of records. If the challenge had involved all records in the table, one could simply search on ! (find duplicate values) in the Zip code field. However, this trick won't work when starting from a found set rather than all records, and constrain won't help here because it doesn't play nicely with the ! operator.

Update: it turns out that there is a way to reliably use constrain with the ! operator from within a found set - see Ralph Learmont's technique + great demo + explanation here - Successfully Find Duplicate Values Within A Set Of Records.

My "off the top of my head" suggestion was: sort by Zip code, then loop through the found set from top to bottom… using GetNthRecord(), test the current record's Zip code against the previous record and also against the next record. If both tests are negative, omit the record; otherwise go to the next record (and of course exit after the last one).

As it turned out, it was a one-time cleanup task, and my suggestion was good enough. But I had a nagging feeling there were better-performing ways to go about this, and today's demo file, Anti-deduping, part 1, presents four different methods. I encourage you to download it, experiment, and add your own methods or variations… perhaps you'll come up with a faster approach, in which case, needless to say, I hope you'll post a comment at the end of this article.
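The sorted neighbor-comparison walk described above translates naturally outside FileMaker. Here is a minimal Python sketch of the same logic; the dict-based records and the `zip` key are illustrative stand-ins for FileMaker records and the demo's Zip field, not part of the demo file itself:

```python
def anti_dedupe_get_nth(records):
    """Keep only records whose Zip also appears in an adjacent record
    after sorting, mirroring the sorted GetNthRecord() walk."""
    recs = sorted(records, key=lambda r: r["zip"])  # this method requires a sort on Zip
    kept = []
    for i, rec in enumerate(recs):
        prev_match = i > 0 and recs[i - 1]["zip"] == rec["zip"]
        next_match = i < len(recs) - 1 and recs[i + 1]["zip"] == rec["zip"]
        # If both neighbor tests are negative, this Zip is unique: omit it.
        if prev_match or next_match:
            kept.append(rec)
    return kept
```

For example, given records with Zips 10001, 90210, 90210, and 60601, only the two 90210 records survive. Note the O(n log n) sort dominates; the comparison pass itself is linear.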
If your found set is small, say 1K or 2K records, it won't matter much which method you use, but as the found set size increases, it becomes clear that each method is faster than its predecessor.

Also, when doing speed comparisons in FileMaker, one needs to consider whether caching is skewing the results. In this demo, I found the timings of the different methods to be consistent, regardless of which order I ran the tests in, or whether I quit and restarted FileMaker between each test.

Another consideration is whether the files are hosted (across a LAN or WAN) or local. I have found performance results to be fairly consistent regardless of the hosting setup… e.g., in my testing, the GetNthRecord approach takes 16 seconds to process 5K records across a WAN, and 15 seconds to do so locally. Unless otherwise specified, all times in this article refer to tests conducted on a local file.

To follow along, generate a found set (there are 20K records in the demo, so that's what you'll get if you click "All"). The GetNthRecord method was my initial stab at solving the challenge… the "off the top of my head" suggestion described previously. Unfortunately it's not going to win any performance prizes.

Here is the basic approach used in the remaining three methods:

1. Use a summary list field, SummaryListZip… …to generate a stack of Zip codes corresponding to the current found set and sort order (or lack thereof). Incidentally, you can easily view the contents of SummaryListZip by clicking here:

2. Push the contents of SummaryListZip into a variable, $$summaryList:

3. Loop through the found set and process the records:

Also, whereas the GetNthRecord method must be sorted on the Zip field to work, the remaining three methods do not require sorting to work… in fact, as we'll see in just a minute, they're much faster when the found set is unsorted.
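The summary-list idea in steps 1-3 boils down to tallying how often each Zip occurs in the found set, then keeping only the records whose Zip occurs more than once. A minimal Python sketch of that idea, assuming the same hypothetical dict-based records as before (the variable names echo the demo's SummaryListZip and $$summaryList, but the code is an illustration, not the demo's script):

```python
from collections import Counter

def anti_dedupe_summary_list(records):
    """Build the full stack of Zips for the found set once (like pushing
    SummaryListZip into $$summaryList), then keep records whose Zip
    occurs more than once. No sorting required."""
    summary_list = [r["zip"] for r in records]  # the stack of Zips, in found-set order
    counts = Counter(summary_list)              # one pass to tally each Zip
    return [r for r in records if counts[r["zip"]] > 1]
```

Because this makes one counting pass and one filtering pass, it runs in linear time and never needs the sort that the GetNthRecord method depends on, which matches the observation above that these methods are faster on an unsorted found set.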