Caching nostr dms #321
base: main
Conversation
Thanks, due to limited time it might take me some time to review this.
Copilot reviewed 5 out of 6 changed files in this pull request and generated 1 comment.
Files not reviewed (1)
- src/Angor/Client/wwwroot/appsettings.json: Language not supported
Comments suppressed due to low confidence (3)
src/Angor/Shared/Services/SignService.cs:17
- [nitpick] The variable name '_subscriptionsHanding' appears to be misspelled; consider renaming it to '_subscriptionsHandling' for consistency.
private readonly IRelaySubscriptionsHandling _subscriptionsHanding;
src/Angor/Shared/Services/SignService.cs:71
- [nitpick] Similar caching event processing is repeated in multiple methods; consider refactoring this common logic into a helper method to reduce duplication and improve maintainability (a sketch of such a helper follows this list).
var cachedEvents = _nostrEventCacheService.GetCachedEvents(subscriptionKey);
src/Angor/Shared/Services/RelayService.cs:108
- [nitpick] Processing of cached events is repeated in several DM lookup methods here as well; extracting a common caching processing function could help reduce code duplication.
var cachedEvents = _nostrEventCacheService.GetCachedEvents(subscriptionKey);
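Both duplication nitpicks above point at the same refactor. A minimal sketch of what a shared helper might look like, assuming the `GetCachedEvents(string)` call quoted above and Nostr.Client's `NostrEvent` type; the method name `ProcessCachedEvents` and the handler delegate are illustrative, not the PR's actual code:

```csharp
// Hypothetical helper inside SignService/RelayService; assumes the existing
// _nostrEventCacheService field and Nostr.Client's NostrEvent type.
private void ProcessCachedEvents(string subscriptionKey, Action<NostrEvent> onEvent)
{
    // Replay everything the local cache already holds for this subscription key
    // before the live relay subscription starts delivering events.
    var cachedEvents = _nostrEventCacheService.GetCachedEvents(subscriptionKey);

    foreach (var cachedEvent in cachedEvents)
    {
        onEvent(cachedEvent);
    }
}
```

Each DM lookup could then call `ProcessCachedEvents(subscriptionKey, ...)` instead of repeating the loop in both services.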
const int maxCacheSize = 100;
if (events.Count > maxCacheSize)
{
    _logger.LogInformation("🧹 Trimming cache for subscription {SubscriptionKey} from {OldCount} to {NewCount} events",
        subscriptionKey, events.Count, maxCacheSize);
    events = events.OrderByDescending(e => e.CreatedAt).Take(maxCacheSize).ToList();
[nitpick] The cache trimming logic uses a hardcoded maxCacheSize; consider abstracting this value into a configuration setting or a named constant so it can be easily adjusted in the future.
Suggested change:
-    const int maxCacheSize = 100;
-    if (events.Count > maxCacheSize)
-    {
-        _logger.LogInformation("🧹 Trimming cache for subscription {SubscriptionKey} from {OldCount} to {NewCount} events",
-            subscriptionKey, events.Count, maxCacheSize);
-        events = events.OrderByDescending(e => e.CreatedAt).Take(maxCacheSize).ToList();
+    if (events.Count > MaxCacheSize)
+    {
+        _logger.LogInformation("🧹 Trimming cache for subscription {SubscriptionKey} from {OldCount} to {NewCount} events",
+            subscriptionKey, events.Count, MaxCacheSize);
+        events = events.OrderByDescending(e => e.CreatedAt).Take(MaxCacheSize).ToList();
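One possible shape for lifting the limit out of the method, as the nitpick suggests: an options type bound from configuration, with the 100-event default kept as a fallback. The `NostrEventCacheOptions` class and the `NostrEventCache` section name are assumptions for illustration, not part of this PR, and the service below is a simplified stand-in for the PR's cache service:

```csharp
using Microsoft.Extensions.Options;

// Illustrative only: the trim limit becomes a configurable option instead of a local const.
public class NostrEventCacheOptions
{
    public int MaxCacheSize { get; set; } = 100; // default mirrors the current hardcoded value
}

public class NostrEventCacheService
{
    private readonly int _maxCacheSize;

    public NostrEventCacheService(IOptions<NostrEventCacheOptions> options)
    {
        // Trimming code would compare against _maxCacheSize instead of a hardcoded 100.
        _maxCacheSize = options.Value.MaxCacheSize;
    }
}
```

Registration at startup would then be something like `builder.Services.Configure<NostrEventCacheOptions>(builder.Configuration.GetSection("NostrEventCache"))`, with the value surfaced in appsettings.json (one of the changed files in this PR).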
    _logger = logger;
}

/// <summary>
We don't add comments; code should be readable without them. Please remove the comments in this class.
@@ -96,65 +104,130 @@ public Task LookupSignaturesDirectMessagesForPubKeyAsync(string nostrPubKey, Dat

    var subscriptionKey = nostrPubKey + "DM";

    // Process cached events first
    var cachedEvents = _nostrEventCacheService.GetCachedEvents(subscriptionKey);
I just wonder what efficiency gains we get here? We create a subscription and call the relay anyway.
Events will be fired twice: once from the cache and then again from each relay.
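One way to address the double-firing concern, in line with the `since` approach described in the PR description below: replay cached events locally, then restrict the relay filter to events newer than the newest cached one. This is a sketch under assumptions, not the PR's actual implementation; the filter property names follow Nostr.Client conventions and only `GetCachedEvents` comes from the quoted code.

```csharp
using System;
using System.Linq;
using Nostr.Client.Messages;
using Nostr.Client.Requests;

// Hypothetical helper: build a DM filter that skips what the cache already holds.
private NostrFilter BuildDmFilter(string nostrPubKey, string subscriptionKey)
{
    var cachedEvents = _nostrEventCacheService.GetCachedEvents(subscriptionKey);

    // Newest cached timestamp, if any; relays are then only asked for newer events,
    // so cached events fire once (from the cache) and not again from every relay.
    DateTime? since = cachedEvents
        .Select(e => e.CreatedAt)
        .Where(t => t.HasValue)
        .OrderByDescending(t => t)
        .FirstOrDefault();

    return new NostrFilter
    {
        Kinds = new[] { NostrKind.EncryptedDm },
        P = new[] { nostrPubKey },
        Since = since
    };
}
```

Events arriving from the relays could additionally be deduplicated by event id before being raised, so overlapping relay responses don't surface the same DM twice.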
Please fix the conflicts.
Added methods in the relay service and client storage to fetch and store the Nostr DMs in local storage using Blazored LocalStorage, and changed the DM lookups to load older messages from local storage and fetch only new ones from the relays using the `since` parameter.
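A rough sketch of the storage side this description implies, assuming Blazored.LocalStorage's `ILocalStorageService` (`GetItemAsync`/`SetItemAsync`); the class name, key scheme, and deduplication here are illustrative, not the PR's exact code:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Blazored.LocalStorage;
using Nostr.Client.Messages;

// Illustrative DM cache backed by browser local storage, keyed per subscription.
public class NostrDmLocalCache
{
    private readonly ILocalStorageService _localStorage;

    public NostrDmLocalCache(ILocalStorageService localStorage)
    {
        _localStorage = localStorage;
    }

    public async Task<List<NostrEvent>> GetCachedEventsAsync(string subscriptionKey)
    {
        // Empty list when nothing has been stored under this key yet.
        return await _localStorage.GetItemAsync<List<NostrEvent>>(subscriptionKey)
               ?? new List<NostrEvent>();
    }

    public async Task StoreEventAsync(string subscriptionKey, NostrEvent nostrEvent)
    {
        var events = await GetCachedEventsAsync(subscriptionKey);

        // Skip duplicates so events replayed by multiple relays don't grow the cache.
        if (events.Any(e => e.Id == nostrEvent.Id))
            return;

        events.Add(nostrEvent);
        await _localStorage.SetItemAsync(subscriptionKey, events);
    }
}
```

The lookup path would then read from this cache first and pass the newest cached timestamp as the `since` value when subscribing to the relays.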