
Ejabberd sucks






EJABBERD SUCKS FREE

For me this is another beautiful example of truly working open standards and free software.


Now eight months later, I received a mail from another student basically asking me the same question! I’m blown away by how fast one can go from the one asking to the one getting asked.

EJABBERD SUCKS HOW TO

As some of you might know, I started to dig into XMPP roughly eight months ago as part of my bachelor's thesis about OMEMO encryption. Back then I wrote a mail to Daniel Gultsch, asking if he could give me some advice on how to start working on an OMEMO implementation.

My implementation consists of a server and a client. The client sends a string to the server, which then 1337ifies the string and sends it back. The goal of NIO is to use few threads to handle all the connections at once; I can handle around 10,000 simultaneous connections using a single thread. The next step will be working NIO into Smack. Last but not least, I once again got excited about the XMPP community.
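The single-threaded echo experiment can be sketched roughly like this. The post doesn't include its source, so the class name, the port, and the exact 1337 substitutions are my assumptions; the core idea is one thread, one `Selector`, many registered channels:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class LeetEchoServer {

    // Assumed 1337ify transform; the original mapping isn't shown in the post.
    static String leetify(String s) {
        return s.replace('e', '3').replace('l', '1')
                .replace('t', '7').replace('o', '0');
    }

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(1337));   // arbitrary example port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(1024);
        while (true) {
            selector.select();  // one thread blocks here for ALL connections
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // New connection: register it, don't spawn a thread.
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);
                    if (n == -1) { client.close(); continue; }
                    String in = new String(buf.array(), 0, n, StandardCharsets.UTF_8);
                    // Simplification: a real server would handle short writes.
                    client.write(ByteBuffer.wrap(
                            leetify(in).getBytes(StandardCharsets.UTF_8)));
                }
            }
        }
    }
}
```

Because no thread is parked per connection, the per-connection cost is just a channel plus a selection key, which is what makes five-digit connection counts on one thread plausible.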

EJABBERD SUCKS CODE

Time is flying by. This week I made some more progress working on the file transfer code. I read the existing StreamInitialization code and found some typos, which I fixed. I then took some inspiration from the SI code to improve my Jingle implementation. Most notably, I created a class FileTransferHandler, which the client can use to control the file transfer and get some information on its status. Most functionality is yet to be implemented, but at least getting notified when the transfer has ended already works. This allowed me to bring the first integration test for basic Jingle file transfer to life. Previously I had the issue that the transfer was started as a new thread, which was then out of scope, so the test had no way to tell if and when the transfer succeeded.

Other than that integration test, I also worked on creating more JUnit tests for my Jingle classes and found some more bugs that way. Tests are tedious, but the results are worth the effort. I hope to keep the code coverage of Smack at least at a constant level – it already dropped a little bit with my commits getting merged, but I'm working on correcting that again.

While testing, I found a small bug in the SOCKS5 proxy tests of Smack. Basically, there were simultaneous insertions into an ArrayList and a HashSet with a subsequent comparison. This failed under certain circumstances (in my university's network) due to the ordering properties of the set. I fixed the issue by replacing the ArrayList with a LinkedHashSet.

Speaking of tests – I created a "small" test app that utilizes NIO for non-blocking IO operations. I put the word small in quotation marks because NIO blows up the code by a factor of at least five.

On the whole, the experience has been terrific. The only thing I'm significantly discomfited by is that 10 MB per-client memory requirement. If none of my other questions get answered, I'd be most appreciative if you could give me some guidance on that.
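The ArrayList-vs-HashSet bug is a classic, and a hedged sketch of its shape (the class and identifiers below are illustrative, not Smack's actual test code) looks like this: a `HashSet`'s iteration order depends on hash codes, so comparing it element-by-element against an insertion-ordered list can fail on some inputs, while `LinkedHashSet` preserves insertion order:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class OrderBugDemo {

    // Returns true only if the set iterates in exactly the list's order.
    public static boolean sameOrder(List<String> list, Set<String> set) {
        Iterator<String> a = list.iterator();
        Iterator<String> b = set.iterator();
        while (a.hasNext() && b.hasNext()) {
            if (!a.next().equals(b.next())) return false;
        }
        return !a.hasNext() && !b.hasNext();
    }

    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        Set<String> hashSet = new HashSet<>();          // order by hash bucket
        Set<String> linkedSet = new LinkedHashSet<>();  // order by insertion
        for (String s : List.of("proxy-20", "proxy-3", "proxy-11")) {
            list.add(s);
            hashSet.add(s);
            linkedSet.add(s);
        }
        System.out.println(sameOrder(list, linkedSet)); // always true
        System.out.println(sameOrder(list, hashSet));   // not guaranteed
    }
}
```

This also explains why the failure was environment-dependent: the hash-bucket order only happened to diverge from insertion order for the particular addresses seen on the university network.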
Lastly, we're using the clustering to do geo-distributed chat, using DNS A records and sortlisting to make sure you go to your local site. We experienced the following challenges that we're trying to figure out if we need to just live with:

DNS round robin sucks at keeping things balanced. Is there another solution I should be staring at other than an actual load balancer?

Memory usage per connected user was around 10 MB. This is about an order of magnitude larger than our old chat system, Openfire. Should I chalk this up to the cost of concurrent performance?

Split-brain recovery requires restarts of the minority nodes, it seems. I tried restarting mnesia, and in every case ejabberd halted as mnesia started.

We tested to 8,000 concurrent messages per second before the two 2-CPU servers in the test cluster reached capacity. However, login-time load was more than capable of bringing the server to its knees if we didn't throttle it a bit. Do I just rely on the client backoff to make sure everyone doesn't log in at once, or is there an accept-queue configuration I should be setting?
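On the client-backoff question: a common way to keep a reconnect storm from landing all at once is exponential backoff with full jitter on the client side. Nothing below is ejabberd-specific, and the base delay and cap are placeholder numbers, not recommendations:

```java
import java.util.Random;

public class ReconnectBackoff {
    private static final Random RND = new Random();

    // Delay before retry number `attempt` (starting at 0), capped at maxMillis.
    static long delayMillis(int attempt, long baseMillis, long maxMillis) {
        // Exponential growth: base, 2*base, 4*base, ... (shift is bounded
        // to avoid overflow on large attempt counts).
        long exp = baseMillis * (1L << Math.min(attempt, 16));
        long capped = Math.min(exp, maxMillis);
        // Full jitter: pick uniformly in [0, capped) so that clients which
        // disconnected together don't all retry in lockstep.
        return (long) (RND.nextDouble() * capped);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 6; i++) {
            System.out.println("retry " + i + " after ~"
                    + delayMillis(i, 1000, 300_000) + " ms");
        }
    }
}
```

The jitter matters more than the exponent here: without it, a mass disconnect turns into synchronized retry waves that hit the server at exactly the intervals of the backoff schedule.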


We're testing with 4000 users total (only about 1500 active), and the roster is an absurd 12000 users long (each user is in 3 groups).


I've spent time over the last few weeks working with a dev to stress-test our proposed installation of ejabberd. I'm trying to future-proof the environment a bit, so we're testing a much larger environment than we currently have. We're using the LDAP shared roster, the patched version for AD.
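For context, an LDAP shared-roster setup of this kind is configured roughly as below. This is a hedged sketch using stock `mod_shared_roster_ldap` option names in the old `ejabberd.cfg` Erlang-terms syntax; every hostname, DN, password, and filter is a placeholder, and the AD-patched module mentioned above may accept different options:

```erlang
%% ejabberd.cfg fragment -- placeholder values only.
{ldap_servers, ["ad.example.com"]}.
{ldap_rootdn, "cn=ejabberd,cn=Users,dc=example,dc=com"}.
{ldap_password, "secret"}.
{ldap_base, "dc=example,dc=com"}.

%% Inside the single {modules, [...]} list:
{mod_shared_roster_ldap, [
  {ldap_rfilter, "(objectClass=group)"},  %% which groups appear in rosters
  {ldap_groupattr, "cn"},                 %% attribute naming a group
  {ldap_memberattr, "member"},            %% AD lists members by full DN
  {ldap_useruid, "sAMAccountName"}        %% AD login attribute
]}.
```

With 4000 users spread over 3 groups each, every group a user belongs to is expanded into that user's roster, which is how a 4000-user directory produces the 12000-entry rosters described below.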






