Ref #36912 - Added benchmarks for Q and Q.create() #98

amakarudze wants to merge 5 commits into django:main
Conversation
```python
        Content.objects.all().delete()

    def time_q_create(self):
        content_list = list(Content.objects.all())
```
Hello! A quick note here. I'm not so familiar with how django-asv works, but it looks like this database call is in the snippet being benchmarked. I think your benchmark needs to ensure it only calls Q() or Q.create().
Thanks Jacob, I was unsure whether database calls were necessary or not. I am still trying to find my way around django-asv.
sarahboyce left a comment
Thank you @amakarudze! ⭐
I have a few minor suggestions; after addressing these, would you mind squashing to a single commit? ❤️
```python
    ]

    def time_create_no_args(self):
        return Q().create(children=self.children_no_args)
```
```diff
-        return Q().create(children=self.children_no_args)
+        for _ in range(100):
+            Q().create(children=self.children_no_args)
```
Suggestions:
- in some benchmarks we run the operation multiple times to exaggerate changes; we might want to do this with all of these benchmarks
- I think the return is not necessary and doesn't match the format of the other benchmarks
I think the loop makes a lot of sense given the main concern is that Q.create is sometimes called in loops
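Putting both review points together (database work kept out of the timed methods, and the timed body looping to exaggerate differences), a revised benchmark might look like the sketch below. The `Q` class here is a hypothetical stand-in so the snippet runs without Django installed; the actual benchmark would import `django.db.models.Q` and follow the repo's `bench_setup` conventions.

```python
class Q:
    """Hypothetical stand-in mimicking Q(**kwargs) and Q.create(children=...)."""

    def __init__(self, **kwargs):
        self.children = sorted(kwargs.items())

    @classmethod
    def create(cls, children=None):
        obj = cls()
        obj.children = children or []
        return obj


class TimeQCreate:
    def setup(self):
        # Expensive work (e.g. building the children list from database
        # rows) belongs here, so it is excluded from the timed body.
        self.children = [("name", "test"), ("rank", 1)]

    def time_q_init(self):
        # Loop to exaggerate differences between runs, per the review.
        for _ in range(100):
            Q(name="test", rank=1)

    def time_q_create(self):
        for _ in range(100):
            Q.create(children=self.children)
```

With this shape, asv times only `Q`/`Q.create` construction, which is the cost the ticket is evaluating.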
```python
from ...utils import bench_setup


class TimeQInit:
```
These appear to be missing the Q calls?
Looking at other recently added benchmarks (e.g. #95), we do not need to update the results/benchmarks.json file. I have a feeling we should ideally git ignore this file
sarahboyce left a comment
@amakarudze I have added more comments.
I feel what we are missing is the results of these benchmarks run against main and against the new branch with the validation.
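For reference, asv can produce that comparison directly with its `continuous` command, which runs the benchmarks on two revisions and reports the deltas. The branch name below is a placeholder for the PR branch:

```shell
# Compare benchmark results between main and the PR branch
# ("q-benchmarks" is a hypothetical branch name; -b filters
# to the benchmark classes added in this PR).
asv continuous main q-benchmarks -b "TimeQ"
```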
```diff
@@ -0,0 +1,33 @@
+from ...utils import bench_setup
```
We actually don't need this benchmark for the ticket I imagine. I think we are only evaluating the cost of having the validation added to Q.create
Added benchmarks for Q.create() in benchmarks/query_benchmarks/query_utils_q_create and for Q in benchmarks/query_benchmarks/query_utils_q, to be used in ticket #36912 to compare the cost of adding validation to Q.create().