Ask Justin Frankel
No reasonable question unanswered since 2009!
Suggested topics: programming, music, sleep, coffee, etc.
Note: please do not ask questions about REAPER features, bugs or scheduling, use the forums instead.
Question:
Can you describe L1/L2 cache and how a programmer can use tools to find out when he's having cache misses?
Asked by Will (24.234.85.x) on September 4 2013, 9:32pm
Reply on September 5 2013, 12:53am (edited at September 5 2013, 12:57am):
The thing to keep in mind is that you have the ability to read and reread a certain amount of memory VERY fast, but if you start trying to access a larger set, it will end up being slower. So if you have some algorithm that is O(N), say a radix sort, and when you double N you measure a large jump in execution time (by a factor of significantly more than 2, for example), you might be thrashing the cache.
I could go into more detail on the implementation of caches, I suppose, but it would all be specific to whatever CPU you happened to be using. There are "cache lines" which (if I remember correctly) represent the minimal unit that the cache can represent; if you access a single byte of memory, the cache line that contains that byte will be valid (meaning you can effectively access those bytes with much less overhead than another region of memory which is not in cache). Then there's associativity, which Chrome tells me is misspelled as I type this, and can be read about on Wikipedia.
Copyright 2025 Justin Frankel.