I’ve been looking for ways to improve Emacs performance, especially since my configuration is over 3k lines. I’m not particularly interested in startup time since I never close Emacs. Here’s what I’ve found so far:
(setq package-native-compile t
      gcmh-high-cons-threshold 100000000
      gc-cons-threshold 100000000
      scroll-conservatively 101
      jit-lock-defer-time 0
      large-file-warning-threshold nil)

(add-hook 'after-init-hook
          #'(lambda () (setq gc-cons-threshold (* 100 1000 1000))))
(defvar gc-timer nil)

(defun salih/maybe-gc ()
  (let ((original gc-cons-threshold))
    (setq gc-cons-threshold 800000)
    (setq gc-cons-threshold original
          gc-timer (run-with-timer 2 nil #'salih/schedule-maybe-gc))))

(defun salih/schedule-maybe-gc ()
  (setq gc-timer (run-with-idle-timer 2 nil #'salih/maybe-gc)))

(salih/schedule-maybe-gc)
I can tell that I’ve noticed some improvements.
If not at startup, what performance deficits have you noticed, and what improvements? How do you measure or notice the improvements?
Define fast. Speed is relative; your fast and my fast may not be the same. But my answer is: lazy-load everything.
> I can tell that I’ve noticed some improvements.
I can tell this claim is worthless without data.
Improve the performance of what?
> I can tell that I’ve noticed some improvements.
I guess you haven’t run your Emacs for long stretches yet, because a garbage collection over ~100 megabytes of allocated memory takes its time. With a threshold that high, you will probably notice Emacs “stuttering”, i.e. freezing for short periods during normal use. The bigger the gc-cons-threshold value, the longer each collection takes, and since Emacs does not have an incremental or multithreaded GC, it will simply appear frozen while the collection runs.
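If you want to see what the collector is actually costing you, Emacs already keeps counters for this; here is a minimal sketch using the built-in gcs-done and gc-elapsed variables (the command name is made up):

;; Report how many collections have run and the average pause per collection.
;; gcs-done and gc-elapsed are counters that Emacs maintains itself.
(defun my/gc-stats ()
  "Show the number of GCs so far and the average pause per GC."
  (interactive)
  (message "GCs: %d, total: %.2fs, average pause: %.3fs"
           gcs-done gc-elapsed
           (if (zerop gcs-done) 0.0 (/ gc-elapsed gcs-done))))

You can also set garbage-collection-messages to t to get an echo-area message whenever a collection actually happens.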
(defun salih/maybe-gc ()
  (let ((original gc-cons-threshold))
    (setq gc-cons-threshold 800000)
    (setq gc-cons-threshold original
          gc-timer (run-with-timer 2 nil #'salih/schedule-maybe-gc))))
Have you even looked at the value of gc-cons-threshold after your idle timer has finished its work? Looking at your code, I believe you will be surprised, because it does not end up being what you think it is.
What is the point of the first setq there?
In the third line you set the value to a hardcoded ~800 kB (I think that is the default on 64-bit systems, but it is not so important), just to immediately override it with the original value, i.e. the 100 MB you defined in the previous code piece (the third line of your config, the one right after the gcmh line). Secondly, what is the reason to use the gcmh package if you are going to do it all manually :)? If I am not mistaken, gcmh does exactly the same thing you are doing by hand in that example; I don’t use the package myself, but someone wrote it precisely to automate the little hack you are trying to build there. IMO, either use that package and be happy, or do it all manually.
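For reference, a minimal sketch of simply letting gcmh handle it (assuming the package is installed; these are its documented knobs):

;; Let gcmh keep the threshold high while you are busy and collect when idle.
(require 'gcmh)
(setq gcmh-idle-delay 5                          ; seconds of idle time before collecting
      gcmh-high-cons-threshold (* 16 1024 1024)  ; threshold while Emacs is in use
      gcmh-low-cons-threshold (* 800 1024))      ; threshold once idle
(gcmh-mode 1)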
And what is the purpose of setq-ing gc-timer in that let body (the last line)? It appears to do exactly the same thing you are already doing in salih/schedule-maybe-gc: scheduling another timer. Effectively, you are telling Emacs, in an endless loop, to set up a new timer whenever it is idle. If you believe you are telling Emacs to actually garbage-collect something, you are wrong; you never call garbage-collect, you just keep setting up another timer.
In other words, your code does not do what you believe it does; it is plainly wrong, or, to put it mildly, buggy.
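For comparison, if the goal really is “collect when Emacs has been idle for a couple of seconds”, a manual version has to call garbage-collect itself; a minimal sketch (names are made up, and this is not a drop-in fix for the code above):

(defvar my/gc-idle-timer nil)

(defun my/gc-when-idle ()
  "Garbage-collect once Emacs has been idle for two seconds."
  (garbage-collect))

;; A repeating idle timer: the second argument is non-nil, so it fires
;; again on every new idle period instead of being rescheduled by hand.
(setq my/gc-idle-timer
      (run-with-idle-timer 2 t #'my/gc-when-idle))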
> I can tell that I’ve noticed some improvements.
I can tell you haven’t; you just don’t know it. The other posters here who told you to benchmark were correct.
> We are not in an academic seminar; such anecdotal statements should be authentic enough.
We don’t benchmark because we are academics, but for our own sake. If you want to improve something, whether CPU execution time, memory usage, or the number of resources allocated (timers, files, sockets, etc.), you have to measure. You can’t know for sure if you don’t measure; there is no way around it. Your computer can be doing other things, your application can be doing other things, and so on. Modern computer systems are not deterministic in the sense that hardware usage is exactly the same each time you run an application: execution time depends heavily on your OS, your CPU scheduler(s), memory usage patterns, and other things your application does not explicitly control.
Without measuring you are walking blindfolded. However, you seem to have other problems besides benchmarking: you should really read the manual for the things you are trying to improve or change, and use the built-in help (C-h f / C-h v) to see what they do and try to understand them. Re-read whichever blog posts you found and reflect carefully on what they say and why. Blindly copying things without understanding them results in code like the above.
Finally, to answer your original question: it all depends on how you use your Emacs. What is fast for one usage pattern might not be fast for another. Again, you have to know what you are doing and measure for your particular use case.
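As a concrete starting point, the built-in benchmark library gives you numbers to compare instead of impressions; a minimal sketch (the form being timed is only an example):

(require 'benchmark)

;; Returns (ELAPSED-SECONDS GC-COUNT GC-SECONDS) for 100 repetitions of the body.
(benchmark-run 100
  (with-temp-buffer
    (insert-file-contents user-init-file)))

Run it before and after a change; if the numbers don’t move, the change did nothing, whatever it feels like.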
emacs -q
Build Emacs yourself and enable native compilation, so Emacs Lisp is compiled to native code rather than just bytecode.
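A quick way to check from inside a running Emacs whether that build option actually took effect (native-comp-available-p exists in Emacs 28 and later):

;; Non-nil only if this Emacs was built with the libgccjit-based native compiler.
(and (fboundp 'native-comp-available-p)
     (native-comp-available-p))

;; Optional: log async native-compilation warnings instead of popping them up.
(setq native-comp-async-report-warnings-errors 'silent)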
I had the package yascroll active, and it was fine in most files, but org-mode was quite slow. I disabled that as well as svg-tag-mode, etc.
- Use a POSIX OS (e.g., Linux, Unix, or macOS), as their file systems and I/O management are generally more performant than Windows. Note: macOS I/O is worse than Linux/Unix but better than Windows, and Apple’s ARM chips generally perform better in their own OS as well.
- I use 29.1 with native compilation enabled, and I generally use full ahead-of-time compilation when I can, so that all the built-in/included-by-default Emacs code is already compiled.
- I use a lot of tricks from Doom Emacs; it’s hard to list them all, but many of them are good. Take a look at their early-init.el, init.el, and core files to see what they do to speed things up in various cases.
- I use Elpaca over straight/use-package for package management, as it was designed to be async from the ground up, so it can do a lot in parallel (well, the Emacs version of parallel).
- I use a lot of built-in hooks, package hooks, and custom hooks to load packages only when they are actually needed, and I try to avoid global modes when possible (a sketch follows below).

My config is a bit of a mess, but if you are curious, you can see it here:
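A minimal sketch of the hook-based lazy loading mentioned in the last bullet (the packages named here are only examples):

;; Defer configuration until the library actually loads.
(with-eval-after-load 'org
  (setq org-hide-emphasis-markers t))

;; Enable a mode only in the buffers that need it, instead of globally.
(add-hook 'prog-mode-hook #'flymake-mode)

;; use-package users get the same effect declaratively.
(use-package magit
  :defer t
  :commands (magit-status))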
Also, if you need to use Windows, I found out recently that even GUI apps can be run under WSL. That seems to help with the speed problem (I only tried Magit and basic file handling, and it is much closer to the speed under Linux).
Yes, definitely use WSL if you are on Windows! Emacs goes from usable to very performant when you set up WSL for the GUI :)
(advice-add 'jsonrpc--log-event :override #'ignore)
Can you say what’s going on here?
I’m assuming it’s avoiding doing work, but from which package? Something in core Emacs?
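For context: jsonrpc.el is a library bundled with core Emacs (it is what Eglot uses to talk to language servers), and jsonrpc--log-event is the internal function that records every request and response in the events buffer, which gets expensive with chatty servers. A slightly more cautious sketch of the same idea, applied only once the library is loaded:

;; Skip jsonrpc's per-message event logging (the double dash marks it as internal).
(with-eval-after-load 'jsonrpc
  (advice-add 'jsonrpc--log-event :override #'ignore))

If you use Eglot, setting eglot-events-buffer-size to 0 reportedly achieves much the same without advising an internal function.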
I’m sure there is a way to do this in vanilla Emacs, but the Doom Emacs CLI lets you compile, so I will compile all the packages with that.
switch to neovim
Upgraded to 29.1
For me, the main issue was startup speed from my customizations. This I sped up by:
- Making use of emacsclient over new Emacs sessions.
- Combining scattered customization files into a single large emacs.el file.
- Using defvar and autoload over require, load, and eval-after-load.
The last one is mostly to allow compiler and Flycheck warnings to work without prematurely loading the dependencies of my customization code.
- Making use of LSP-mode, following the performance instructions here: https://emacs-lsp.github.io/lsp-mode/page/performance/ (a sketch of the usual settings from that page is below).
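For reference, a sketch of the usual settings from that page (the values are the ones it suggests; adjust to taste):

;; Settings recommended by the lsp-mode performance guide.
(setq gc-cons-threshold 100000000             ; 100 MB
      read-process-output-max (* 1024 1024))  ; read up to 1 MB per chunk from the server

The page also suggests enabling plists for deserialization, which requires the LSP_USE_PLISTS environment variable to be set before lsp-mode is compiled and loaded.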
For the rest: M-x profiler-start, M-x profiler-stop, then M-x profiler-report.
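The same thing can be scripted when you want to profile one specific operation; a minimal sketch using the built-in profiler:

(require 'profiler)

(profiler-start 'cpu+mem)   ; sample both CPU time and memory allocations
;; ... do the slow thing here: open the big file, scroll, run the command ...
(profiler-report)           ; pops up a tree you can expand with TAB
(profiler-stop)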