Expand 500 entry limit
Please expand the 500-entry limit to 3000 (or unlimited) for each feed.
-
ali commented
Vladimir, thanks for your comment. Yes, I believe it would be a great compromise. Increasing the limit for a maximum of 3-5 personal feeds would solve most people's issue, in my opinion.
My use case: some of my RSS feeds are meant to be read from the beginning, like stories. But most of my RSS feeds are news-like feeds where it only makes sense to read the last few articles.
Another possible solution I thought of: if the user manually scrolls, fetch additional articles until they stop scrolling, and store those old articles for a maximum of 10 days. This, however, would only be possible if the RSS feed itself still has all the articles embedded in it.
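The scroll-triggered fetch with a 10-day retention window proposed above could be sketched roughly like this; every name and the in-memory storage layer are hypothetical illustrations, not BazQux's actual design:

```python
import time

RETENTION_SECONDS = 10 * 24 * 3600  # keep scroll-fetched items for 10 days

class ScrollArchive:
    """Cache of older feed items fetched on demand when the user scrolls."""

    def __init__(self, fetch_older):
        # fetch_older(feed_url, before_id, n) should return up to n items
        # older than before_id; this only works if the feed document itself
        # still embeds those old articles.
        self.fetch_older = fetch_older
        self.cache = {}  # item id -> (item, fetch timestamp)

    def on_scroll(self, feed_url, oldest_visible_id, batch=50):
        """Called each time the user scrolls past the oldest loaded item."""
        items = self.fetch_older(feed_url, oldest_visible_id, batch)
        now = time.time()
        for item in items:
            self.cache[item["id"]] = (item, now)
        return items

    def expire(self, now=None):
        """Drop cached items older than the retention window."""
        now = time.time() if now is None else now
        self.cache = {i: (item, t) for i, (item, t) in self.cache.items()
                      if now - t < RETENTION_SECONDS}
```

The retention pass would run periodically, so a feed the user never scrolls back into never accumulates extra storage.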
Let me know what you think :)
-
An SSD/HDD mix will take a lot of time to implement.
It might be possible to increase the limit for a few selected feeds, but any kind of unlimited feed is both difficult to implement and could take up too much disk space.
Unfortunately, nothing has changed here.
-
ali commented
Hi Vladimir, any way to change this? Maybe we can come to a compromise?
- By default, keep the maximum of 500 items, but if the user manually scrolls to view older RSS feed items, fetch them and store them on HDD (since they are old). This would keep most feeds at 500 or fewer items, while a feed you want to scroll back in could have more than 500.
- Or what if you allowed users to select 5-10 feeds where they will have no limits?
I do agree with you that some feeds will generate a lot of data, but maybe we can come to a compromise. Some feeds are meant to be read from the beginning, whereas with others you only care about the most recent items.
-
I don't think so. Google Reader used BigTable or some other equally big Google-scale database, so it had no issues storing a lot of data. It just wasn't growing fast enough, and projects with <100M users are not that interesting for Google.
-
Sergey Redin commented
Sorry for a possibly useless comment: do you think Google Reader could still exist if it had used this limit?
-
Nothing has changed since my last comment. There are no plans to increase the 500-item limit at the moment.
-
GlacJAY commented
How about now? 😄️
-
Anonymous commented
I think it's really important to expand the entry limit! I would be willing to pay for this.
-
There is a big problem with this feature. There are many high-volume feeds that usually post things no one ever reads (or reads only partially), yet they account for about 80% of all posts.
Increasing the limit from 500 to 1000 means doubling database and search index size. Increasing it to 3000 means about 5-6x more disk space and more RAM (search uses RAM quite heavily). That means major increases in server costs just because of some feeds that are mostly unread.
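The scaling argument above can be checked with back-of-the-envelope arithmetic; the per-post size and index overhead below are invented illustrative figures, not real BazQux numbers:

```python
# Rough, linear storage-scaling estimate for raising the per-feed cap.
AVG_POST_BYTES = 4 * 1024   # assumed average stored size of one post
INDEX_FRACTION = 0.5        # assumed search-index size relative to the data

def storage_per_feed(cap):
    """Bytes needed for one feed at its item cap: data plus search index."""
    data = cap * AVG_POST_BYTES
    return data * (1 + INDEX_FRACTION)

base = storage_per_feed(500)
print(storage_per_feed(1000) / base)  # 2.0: doubling the cap doubles storage
print(storage_per_feed(3000) / base)  # 6.0: a 3000-item cap is ~6x the baseline
```

Because both data and index grow linearly with the cap, the ratio is independent of the assumed constants; the "about 5-6x" figure matches a 6x linear growth minus whatever old posts compress or dedupe away.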
It also hurts performance, since post lists become larger and more memory and internal network bandwidth are needed; that is, again, more hardware is needed to maintain the same performance.
While it is probably possible to move old posts from SSD to HDD, I'm not yet sure the search index won't require too much RAM. It would also require considerable engineering effort to make this separation, and I have a lot of other features to work on.
The only possible way I see at the moment is to increase the limit to a few thousand for an additional price. But I don't think I'll have time to implement it in the near future.
-
roman commented
This feature is really fucking crucial. I'm ready to pay for this option. One of the killer features of GR: just scroll the mouse wheel and move back to the birth of a feed, no matter how far in the past it is.
-
Suresh Iyer commented
I would second the option of having access to older items at sub-par search performance. Maybe you could even add a "search archive" flag so that the user knows it is going to be relatively slow. I think most users wouldn't mind trading speed for access to old stuff. One of the reasons GReader was great was that it was my personal slice of the web, and many times I would search in GReader with the keywords I had in mind before attempting a G search.
PS: This is my first post on UserVoice. Thank you, VS, for all the great work on this nifty reader! I hope you will reconsider your decision regarding this one.
-
Chris commented
Maybe you could put those older posts into a different partition of the database server, since they are not requested often, i.e. an archive functionality.
It happened to me a lot with Google Reader: finding an older blog post, subscribing to the feed, and starring that post to have it bookmarked.
Throwing away non-bookmarked posts is a bit suboptimal to me. Maybe there are alternatives to deleting them.
I've chosen to use a one-time mail account to write this, as I do not want my Google profile information (such as my name) to be published publicly.
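A minimal sketch of that archive-partition idea, using SQLite with invented table and column names (BazQux's real schema is unknown): posts beyond the newest 500 are moved from the frequently queried "hot" table into a rarely touched archive table.

```python
import sqlite3

HOT_LIMIT = 500  # the current per-feed cap

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (feed_id INTEGER, post_id INTEGER, body TEXT);
    CREATE TABLE posts_archive (feed_id INTEGER, post_id INTEGER, body TEXT);
""")

def archive_old_posts(conn, feed_id, limit=HOT_LIMIT):
    # Select everything beyond the newest `limit` posts of this feed
    # (in SQLite, LIMIT -1 means "no limit" so OFFSET alone applies).
    old = conn.execute("""
        SELECT feed_id, post_id, body FROM posts
        WHERE feed_id = ? ORDER BY post_id DESC LIMIT -1 OFFSET ?
    """, (feed_id, limit)).fetchall()
    # Copy the overflow to the archive, then delete it from the hot table.
    conn.executemany("INSERT INTO posts_archive VALUES (?, ?, ?)", old)
    conn.executemany(
        "DELETE FROM posts WHERE feed_id = ? AND post_id = ?",
        [(f, p) for f, p, _ in old])
    return len(old)
```

Queries against `posts_archive` would then only run for explicit archive searches, keeping the hot table and its search index small.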
-
Vegard Langås commented
This is really important. I would not mind having to wait a few seconds for BazQux to fetch me some old posts. But losing them after 500 is just harsh and stupid. 500 posts is not much; for Gizmodo that's about two weeks or less. Compared to Google Reader, this is less than half of what it was able to keep track of as unread.
One of the best and most pleasant surprises of Google Reader was that when you were late to discover a comic strip like xkcd, you could enter the time machine in the fast UI we used to love (it preloaded the next item as you were reading) to catch up on more than a thousand strips.
Google Reader could even resurrect dead websites and restore lost data when websites burned to the ground.
No one is expected to have crawled Google Reader's database (though someone should have). But from the day someone subscribes to a feed (and for as long as someone is subscribed to it), that proves the information should be kept.
I am not saying you need to build a Google-sized datacenter and start storing every single journalist's word forever. But please, reach for the stars and keep expanding BazQux's capability to be a time machine.
I suggest initially increasing the limit to a minimum of 1000 items or 30 days, whichever is larger. This would mimic Google Reader's ability to keep track of at least one month of unread items.
Don't forget that many users are completists and use their RSS reader to get everything.
-
Suoshi Etchi commented
In Google Reader (if I'm not mistaken), only posts from the last month are visible: right now, for example, posts from June 1 to July 1 are available. Meanwhile, Just Reader has accumulated more than 2,500 articles of that same feed for me (I couldn't find time to read it for a long while), i.e. Google Reader's older posts were not deleted from the app. And that's wonderful.
So I still have hope that JustReader won't spontaneously delete BazQux posts on its own (once their number exceeds 500). And if they do get deleted, I'll tearfully beg the app's developer (and apparently the BazQux service developers too; I don't know whose domain this functionality is) to make it behave like Google Reader.
PS: Thank you for such quick and detailed answers.
-
In theory, a feed shouldn't show 500+, only 500. It's a folder that can show 500+.
If you set Sort by Oldest, the list will start from the 500th-from-newest post. There can't be 600 posts in a feed.
In JustReader, all posts in a feed beyond 500 will be deleted on sync.
-
Suoshi Etchi commented
Oh, it turns out one can write in Russian here.. )
Please clarify the following situation. Suppose a feed of mine has accumulated, say, 600 unread entries. I open this feed in the BazQux.com web interface (the counter shows 500+ entries) and sort the list with the "Sort by Oldest" option. In this situation, the top line of that feed will show the 100th entry, not the first one, right? So I would lose 99 entries in this case?
And if I keep syncing this feed (without reading it) in my favorite JustReader (using JustReader's built-in option to download the whole feed to the phone for later offline reading), will BazQux's 500-entry-per-feed limit spoil everything for me there too, or will all entries beyond the 500th disappear from JustReader as well on the next sync? Sorry if this question should have been asked on the JustReader blog rather than here... I'm just trying to understand the situation and haven't yet figured out the boundary between the app and the service.
-
This limit is needed for performance reasons. I'm not planning to increase it in the near future.