To find coincidences between two unordered lists, sorting both lists is one approach, but it can be inefficient for large datasets. Data structures like `std::unordered_set` in C++ or sets in Python provide faster lookups: each membership test is O(1) on average, so a full pass over a list is O(n). While sorting has a time complexity of O(n log n), hash sets allow nominally linear performance, though worst-case hashing behavior can degrade to quadratic time. Implementing multiple methods and measuring their performance is recommended, as cache effects can strongly influence real-world efficiency. Ultimately, the choice of algorithm should prioritize clarity and maintainability before optimizing for speed.
#1
ORF
Hello
I need to find coincidences between items in 2 lists. The items in the lists are unordered, so my first idea was to sort the items, after which finding the coincidences would be easy. The main problem is the large number of items in the lists. I was using doubly-linked lists to sort the items.
Another idea is to compare each item of one list with all the items in the other list.
Both methods seemed horribly slow to me, so my question is: is there a better algorithm for finding coincidences between two lists?
You could put all the items of the first list into a set (e.g. a hash set) and then check if any of the items of the second list are part of that set.
But that is going to be slower than the sorting approach. How long are the lists? And do you just want to find all the elements that are contained in both lists regardless of their frequency?
Although I cannot speak in the abstract about every kind of list and process on them, good sorting algorithms like quicksort or mergesort have good running times. Of course, an amortized analysis must be done with the specific data in the lists taken into account, but if the lists are long enough, sorting algorithms with good running times will generally do better than other approaches. Direct comparison between the two lists will be very inefficient (an O(n²) sort of thing).
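For illustration (this sketch is not from the thread), here is the sort-then-scan idea in C++, assuming int items held in std::vectors with made-up sample data:
C++:
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> a = {7, 3, 9, 1, 5};
    std::vector<int> b = {4, 5, 6, 7, 8};

    // Sort both lists: O(n log n) each.
    std::sort(a.begin(), a.end());
    std::sort(b.begin(), b.end());

    // One linear pass over both sorted ranges: O(n1 + n2).
    std::vector<int> common;
    std::size_t i = 0, j = 0;
    while (i < a.size() && j < b.size())
    {
        if (a[i] < b[j])      ++i;
        else if (b[j] < a[i]) ++j;
        else { common.push_back(a[i]); ++i; ++j; }
    }

    for (int x : common) std::cout << x << ' ';  // prints: 5 7
}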
DrZoidberg said:
You could put all the items of the first list into a set (e.g. a hash set) and then check if any of the items of the second list are part of that set.
Using Python you can put the members of the two lists into two sets, and then find the intersection of the two sets using the intersection() method.
#7
Jarvis323
DrZoidberg said:
You could put all the items of the first list into a set (e.g. a hash set) and then check if any of the items of the second list are part of that set.
But that is going to be slower than the sorting approach.
I believe that the set approach is almost always going to be faster. Assuming you use a hash-table-based set, the amortized time complexity is O(n). Sorting alone is O(n log n).
In C++ the thing to use is std::unordered_set, or std::unordered_map if you want to count occurrences. In Java it's HashSet, HashMap. In Python: set, dict.
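As a minimal sketch of the hash-set approach (the sample data is illustrative, not from the thread):
C++:
#include <iostream>
#include <unordered_set>
#include <vector>

int main()
{
    std::vector<int> list1 = {7, 3, 9, 1, 5};
    std::vector<int> list2 = {4, 5, 6, 7, 8};

    // Build a hash set from the first list: nominally O(n1).
    std::unordered_set<int> seen(list1.begin(), list1.end());

    // Probe with each item of the second list: nominally O(1) per lookup.
    for (int x : list2)
        if (seen.find(x) != seen.end())
            std::cout << x << ' ';  // prints: 5 7
}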
Wow, I didn't expect so many answers, thank you! :D
I can use C/C++ (I'm still learning C++11), Python or Visual Basic, but I think C++ is the fastest.
@DrZoidberg: that idea works fine with short lists, but with the long ones... One other thing: items will not be repeated within the same list.
@Jarvis323: is the "find" function fast enough?
Thank you for your time :)
Greetings!
#9
Integrand
First implement in the clearest, most maintainable way, and only then address performance as needed. In C++, the clearest expression of this problem using standard algorithms may be with std::set_intersection (http://en.cppreference.com/w/cpp/algorithm/set_intersection). Note that despite the name it is not limited to std::sets.
If you need to optimize, test. Today, cache effects make it much more difficult than it once was to predict optimal solutions based only on Big-O. In your testing of alternatives, favour compact representations of the data to maximize the benefit of caching. For example, try std::vector rather than std::list.
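As a rough illustration of that advice (made-up sample data; note that std::set_intersection requires both input ranges to be sorted first):
C++:
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

int main()
{
    std::vector<int> a = {7, 3, 9, 1, 5};
    std::vector<int> b = {4, 5, 6, 7, 8};

    // std::set_intersection requires sorted input ranges.
    std::sort(a.begin(), a.end());
    std::sort(b.begin(), b.end());

    std::vector<int> common;
    std::set_intersection(a.begin(), a.end(),
                          b.begin(), b.end(),
                          std::back_inserter(common));

    for (int x : common) std::cout << x << ' ';  // prints: 5 7
}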
#10
Silicon Waffle
C++:
#include <set>

std::set<int> vec1, vec2, common;

// Prepare 2 sets: vec1 = {0..9}, vec2 = the even numbers in {0..9}
for (int i = 0; i < 10; i++)
{
    vec1.insert(i);
    if (i % 2 == 0)
        vec2.insert(i);
}

// Make the common set: insert() returns a pair whose .second is false
// when the element was already present, i.e. a coincidence.
// (Note this mutates vec1 as a side effect.)
for (std::set<int>::iterator it = vec2.begin(); it != vec2.end(); it++)
{
    if (!vec1.insert(*it).second)
    {
        common.insert(*it);
    }
}
#11
ORF
Hello
@Integrand: I did not know the "algorithm" library, thank you! :D
@Silicon Waffle: on the page http://en.cppreference.com/w/cpp/algorithm/set_intersection (just for accuracy, I think the proper function for looking only for coincidences between 2 lists would be set_intersection) it's explained that the complexity scales with N1+N2. With that code, I think you pass through all the items in both lists, so the complexity scales with N1*N2. In my case, N1 and N2 will be thousands of millions of items.
#12
ORF said:
I need to find coincidences between items in 2 lists. The items in the lists are unordered, so my first idea was to sort the items, after which finding the coincidences would be easy. The main problem is the large number of items in the lists. I was using doubly-linked lists to sort the items.
Based on subsequent answers, you're using C++. In C++, it's generally better to use a std::vector rather than a std::list. An exception would be if you are inserting or deleting anywhere but at the end. But if you're just building the container up one element at a time, use a vector.
It's probably faster here to use std::unordered_set s(list1.begin(), list1.end()). The implementation might take advantage of the size of the range to choose the bucket count.
Personal preference, I would use if (s.count(item) != 0) { ... }. (Even better would be if there were a std::unordered_set::contains member function that returned a boolean.)
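Putting those two suggestions together, a minimal sketch (the int items and sample data are assumptions for illustration):
C++:
#include <iostream>
#include <unordered_set>
#include <vector>

int main()
{
    std::vector<int> list1 = {7, 3, 9, 1, 5};
    std::vector<int> list2 = {4, 5, 6, 7, 8};

    // The range constructor lets the implementation size the bucket
    // array up front instead of rehashing as elements arrive.
    std::unordered_set<int> s(list1.begin(), list1.end());

    for (int item : list2)
        if (s.count(item) != 0)        // 0 or 1, since set keys are unique
            std::cout << item << ' ';  // prints: 5 7
}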
Integrand said:
First implement in the clearest, most maintainable way, and only then address performance as needed. In C++, the clearest expression of this problem using standard algorithms may be with std::set_intersection (http://en.cppreference.com/w/cpp/algorithm/set_intersection). Note that despite the name it is not limited to std::sets.
It is, however, limited to ranges that are sorted, and sorting is an O(N log N) operation.
Today, cache effects make it much more difficult than it once was to predict optimal solutions based only on Big-O. In your testing of alternatives, favour compact representations of the data, to maximize benefit from cacheing. For example, try using std::vector rather than std::list.
This is key. It might well be faster to use a fast O(N log N) algorithm (e.g., sorting a pair of std::vectors in place, and then using std::set_intersection to find the intersection) than to use a slow O(N) algorithm (e.g., constructing a std::unordered_set from one of the containers, and then using a loop to find the intersection). There's also the risk of encountering worst-case performance with std::unordered_set. Constructing an unordered set from a container is nominally linear in the size of the container, but it can be quadratic.
If performance matters (and from other posts by the OP, it appears performance does matter), the best thing to do is to implement multiple approaches and see which wins. But the first thing to do is to test whether performance does matter. Implement it simply and test. If there aren't any performance issues, you're done.
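One possible harness for such a test, as a rough sketch (the sizes, seed, and data generation below are arbitrary choices, not measurements from this thread):
C++:
#include <algorithm>
#include <chrono>
#include <iostream>
#include <iterator>
#include <numeric>
#include <random>
#include <unordered_set>
#include <vector>

// Time a callable and report how many matches it found.
template <typename F>
void time_it(const char* label, F f)
{
    auto t0 = std::chrono::steady_clock::now();
    std::size_t n = f();
    auto t1 = std::chrono::steady_clock::now();
    std::cout << label << ": " << n << " matches in "
              << std::chrono::duration<double>(t1 - t0).count() << " s\n";
}

int main()
{
    // Two lists of 10 million unique, shuffled ints (sizes are arbitrary).
    std::mt19937 rng(42);
    std::vector<int> pool(20000000);
    std::iota(pool.begin(), pool.end(), 0);
    std::shuffle(pool.begin(), pool.end(), rng);
    std::vector<int> a(pool.begin(), pool.begin() + 10000000);
    std::shuffle(pool.begin(), pool.end(), rng);
    std::vector<int> b(pool.begin(), pool.begin() + 10000000);

    // Approach 1: sort copies of both lists, then std::set_intersection.
    std::vector<int> a1 = a, b1 = b;
    time_it("sort + set_intersection", [&]() -> std::size_t {
        std::sort(a1.begin(), a1.end());
        std::sort(b1.begin(), b1.end());
        std::vector<int> common;
        std::set_intersection(a1.begin(), a1.end(), b1.begin(), b1.end(),
                              std::back_inserter(common));
        return common.size();
    });

    // Approach 2: build a hash set from one list, probe with the other.
    time_it("unordered_set probe", [&]() -> std::size_t {
        std::unordered_set<int> s(a.begin(), a.end());
        std::size_t n = 0;
        for (int x : b) n += s.count(x);
        return n;
    });
}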
Silicon Waffle said:
C++:
// Make the common set
for (std::set<int>::iterator it = vec2.begin(); it!=vec2.end(); it++)
{
if (!vec1.insert(*it).second)
{
common.insert(*it);
}
}
There's no need to use if (!vec1.insert(*it).second). Either use Jarvis323's if (s.find(item) != s.end()), or my suggested alternative, if (s.count(item) != 0).
ORF said:
on the page http://en.cppreference.com/w/cpp/algorithm/set_intersection (just for accuracy, I think the proper function for looking only for coincidences between 2 lists would be set_intersection) it's explained that the complexity scales with N1+N2. With that code, I think you pass through all the items in both lists, so the complexity scales with N1*N2. In my case, N1 and N2 will be thousands of millions of items.
You are forgetting that sorting is O(N log N), and your lists are unsorted. You'll have to sort them before you can use std::set_intersection, so the cost is dominated by the two sorts: O(N1 log N1 + N2 log N2). Compare that to the nominally linear cost of constructing a std::unordered_set from a container, and the nominally O(1) cost of looking up an element in an unordered set. The whole algorithm using an unordered set is nominally O(N1+N2). But that assumes nominal behavior. There is the possibility of running into worst-case performance, in which case a set-based algorithm does indeed become O(N1*N2).
#13
ORF
Hi
I've tried different containers, and the best option (at least for me) is a map (C++).
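The map-based code isn't shown in the thread; purely as an illustrative guess at its shape, one could index the first list in a std::map and probe it with the second:
C++:
#include <iostream>
#include <map>
#include <vector>

int main()
{
    std::vector<int> list1 = {7, 3, 9, 1, 5};
    std::vector<int> list2 = {4, 5, 6, 7, 8};

    // Key = item; insertion and lookup are O(log n) per item.
    std::map<int, bool> index;
    for (int x : list1) index[x] = true;

    for (int x : list2)
        if (index.count(x) != 0)
            std::cout << x << ' ';  // prints: 5 7
}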