Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Message: All the data files must have the same columns, but at some point there is 1 new column ({'messages'}) and 7 missing columns ({'discussion', 'image_url', 'views', 'fancy_title', 'tags', 'title', 'created_at'}). This happened while the json dataset builder was generating data using hf://datasets/wangzhang/mongoDB_community_hot/mongoDB_data_hot.jsonl (at revision af70a7e8c88f0324d190036b17b260a420dceeec). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
created_at (string) | image_url (null) | views (int64) | title (string) | fancy_title (string) | discussion (list) | tags (sequence)
---|---|---|---|---|---|---
2022-02-06T16:10:18.497Z | null | 24,789 | Aggregate $match _id $eq $toObjectId not working | Aggregate $match _id $eq $toObjectId not working | [
{
"code": "db.getCollection('dlsComponents').aggregate([\n { $match: { library: 'Library1', collection: 'Collection1', media: 'Images', object: 'Image3' } }\n ])\n{ _id: ObjectId(\"61fc458b46d7874a3a97ef79\"),\n library: 'Library1',\n collection: 'Collection1',\n media: 'Images',\n object: 'Image3',\n info: 'Image: 1/1/Images/Image3 Info', …\ndb.getCollection('dlsComponents').aggregate([\n { $match: { _id: { $eq: { $toObjectId: \"61fc458b46d7874a3a97ef79\" } } } }\n ])\n",
"text": "Using Compass 1.30.1, I was testing an aggregation and getting unexpected results. A $match was not working as expected. The simplified aggregation is:And this gives the expected result by finding a document:try to get the same document by _id:does not find a document. Why does the second $match not find a document?",
"username": "David_Camps"
},
{
"code": "",
"text": "I found that:{ $match: { $expr: { $eq: [ ‘$_id’, ‘$$imageId’ ] } } }does work ($$imageId is an ObjectId used in the non-simplified aggregate). Maybe the { $eq: ‘$value’ } format does not work in pipelines.",
"username": "David_Camps"
},
{
"code": "$eq$eq: ObjectId(\"...\")",
"text": "Hi David,The $eq used in find()/$match (without $expr) must specify an exact value: https://docs.mongodb.com/manual/reference/operator/query/eq/You can use $eq: ObjectId(\"...\")Jess",
"username": "jbalint"
},
{
"code": "{ $match: { $expr : { $eq: [ '$_id' , { $toObjectId: \"61fc458b46d7874a3a97ef79\" } ] } } }\n",
"text": "Your last post made me think that may be $toObjectId works only inside $expr. I triedand it works.",
"username": "steevej"
},
{
"code": "",
"text": "See on a related topicand",
"username": "steevej"
},
{
"code": "",
"text": "even gpt4 doesn’t solve my problem, thank",
"username": "Crown_International_Technology_Pvt_Ltd_CIT"
},
{
"code": "",
"text": "Did you told GPT4 what was your problem? Or did you do like you did here., just saying that you have a problem.",
"username": "steevej"
}
] | [
"aggregation",
"compass"
] |
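The resolution of the thread above generalizes: $toObjectId is an aggregation expression, so it only works inside $expr, while a plain query-style $eq compares its operand literally. A minimal Node.js sketch contrasting the variants (the _id value is the one from the thread; no driver or server is needed just to build the pipelines):

```javascript
// Sketch of the $match variants discussed above.
// In query context, $eq takes a literal value, so the embedded object
// { $toObjectId: ... } is compared as a plain sub-document and never matches.
const broken = [
  { $match: { _id: { $eq: { $toObjectId: "61fc458b46d7874a3a97ef79" } } } },
];

// $toObjectId is an aggregation expression, so it must run inside $expr:
const works = [
  { $match: { $expr: { $eq: ["$_id", { $toObjectId: "61fc458b46d7874a3a97ef79" }] } } },
];

// Outside $expr, pass a constructed ObjectId instead (as Jess suggests):
// { $match: { _id: new ObjectId("61fc458b46d7874a3a97ef79") } }

console.log("broken compares a literal:", JSON.stringify(broken[0].$match._id.$eq));
console.log("works wraps the comparison in:", Object.keys(works[0].$match)[0]);
```

The same rule explains the $$imageId variant in the thread: $expr switches $eq into its two-argument expression form, where both sides are evaluated.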
2022-10-06T08:01:53.956Z | null | 5,791 | Problem installing MongoDB 6.0 on Amazon Linux 2 | Problem installing MongoDB 6.0 on Amazon Linux 2 | [
{
"code": "",
"text": "I’m also having the same issue as Install mongodb-org 5.0 on Amazon Linux 2 aarch64 architecture. how to resolve",
"username": "Simeon_Palla"
},
{
"code": "/proc/cpuinfox86_64aarch64",
"text": "Welcome to the MongoDB Community @Simeon_Palla!Please provide more details on the issue you are encountering:Aside from the typo in the original post, the repo format seems to be OK. I would follow the general tutorial to Install MongoDB Community Edition on Amazon Linux and replace x86_64 with aarch64.Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I’m trying to install MongoDB on the AWS Linux ec2 server, for my node.js app backend. but when I’m trying to install MongoDB I’m getting these errors$ sudo yum install -y mongodb-org\nLoaded plugins: extras_suggestions, langpacks, priorities, update-motd\nNo package mongodb-org available.\nError: Nothing to do\nerror1918×636 43.2 KB\n",
"username": "Simeon_Palla"
},
{
"code": "",
"text": "already tried this Install MongoDB Community Edition on Amazon Linux but not working",
"username": "Simeon_Palla"
},
{
"code": "/etc/yum.repos.d/mongodb-org-6.0.repoyum installyum repolist",
"text": "Hi @Simeon_Palla,Did you create the /etc/yum.repos.d/mongodb-org-6.0.repo file before running yum install ?What is the output of yum repolist?Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "x86_64",
"text": "Hi @Simeon_Palla ,Please also confirm the hardware architecture your EC2 instance is using (x86_64, Graviton, etc).Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "sudo yum install -y mongodb-org",
"text": "Hello @Stennie_X ! so I am having the same issue hereI have created the file as requested and still sudo yum install -y mongodb-org returns “No package mongodb-org available”I have tried to install it on similar machine and it worked just finehardware architecture: aarch64",
"username": "Ella_Mozes"
},
{
"code": "",
"text": "I’m having the same issue with amazon linux:Checking the repolist it seems that MongoDB 6 has much less entries:amzn-updates/latest amzn-updates-Base 7,548\nmongodb-org-3.4 MongoDB Repository 150\nmongodb-org-3.6 MongoDB Repository 144\nmongodb-org-4.0 MongoDB Repository 170\nmongodb-org-4.2 MongoDB Repository 120\nmongodb-org-4.4 MongoDB Repository 196\nmongodb-org-5.0 MongoDB Repository 177\nmongodb-org-6.0 MongoDB Repository 39",
"username": "Nicolas_Dickreuter"
},
{
"code": "",
"text": "Hi Stennie,\nI am trying to install Mongodb 6.0 in Amazon linux (https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-amazon/). But getting error as:package mongodb-org-6.0.6-1.amzn2.x86_64 requires mongodb-org-database, but none of the providers can be installed[root@ip-172-31-37-199 ~]# yum repolist\nrepo id repo name\namazonlinux Amazon Linux 2023 repository\nkernel-livepatch Amazon Linux 2023 Kernel Livepatch repository\nmongodb-org-6.0 MongoDB Repository[root@ip-172-31-37-199 ~]# aws --version\naws-cli/2.9.19 Python/3.9.16 Linux/6.1.29-47.49.amzn2023.x86_64 source/x86_64.amzn.2023 prompt/offPlease help",
"username": "Shivangi_Agarwal"
},
{
"code": "",
"text": "Hi Shivangi,\nDid you get any solution for your issue ? I’m hvaing a similar issue.\nError:\nProblem: conflicting requests[root@ip--21-10- ~]# aws --version\naws-cli/2.9.19 Python/3.9.16 Linux/6.1.34-59.116.amzn2023.x86_64 source/x86_64.amzn.2023 prompt/off",
"username": "vijay_shankar_Singh"
},
{
"code": "",
"text": "Hello all! I was having this issue today following the Install MongoDB Community Edition on Amazon Linux tutorial.I switched from the Amazon Linux 2 tab over to the Amazon Linux 2022 tab,\nresulting in a different base url in the yum repo file. This seemed to do the trick! Install complete.",
"username": "armslice_N_A"
}
] | [] |
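For reference, the repo definition the tutorial asks for looks roughly like the sketch below; treat the exact baseurl and gpgkey as assumptions to verify against the current install docs. On Graviton (aarch64) instances the x86_64 path segment must be aarch64, and on Amazon Linux 2023 the /amazon/2/ segment becomes /amazon/2023/ — the mismatch between those two appears to be what several posters above hit (an amzn2 package on an amzn2023 host, or vice versa).

```ini
; Hypothetical sketch of /etc/yum.repos.d/mongodb-org-6.0.repo
; (verify baseurl and gpgkey against the current MongoDB install tutorial)
[mongodb-org-6.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2/mongodb-org/6.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://pgp.mongodb.com/server-6.0.asc
```

After writing the file, `yum repolist` should list mongodb-org-6.0 before `sudo yum install -y mongodb-org` can succeed.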
2023-03-28T19:55:40.589Z | null | 8,763 | Getting this error - MongoNotConnectedError: Client must be connected before running operations | Getting this error - MongoNotConnectedError: Client must be connected before running operations | [
{
"code": "// Code to require the parts needed for seedsindex to work correctly\nconst mongoose = require('mongoose');\nconst MusicProduct = require('../database_models/musicproduct');\nconst BookProduct = require('../database_models/bookproduct');\n\nconst musicAlbums = require('./musicseeds');\nconst bookNovels = require('./bookseeds');\n\n// Connnect to MongoDB\nmongoose.connect('mongodb://127.0.0.1/music-bookApp');\nmongoose.set('strictQuery', false);\n\n// Logic to check that the database is connected properly\nmongoose.connection.on('error', console.error.bind(console, 'connection error:'));\nmongoose.connection.once('open', () => {\n console.log('Database connected');\n});\n\n//Fill the Music products database with 20 random albums taken from the music seeds file\nconst musicSeedDB = async () => {\n await MusicProduct.deleteMany({});\n for (let i = 0; i < 20; i++) {\n const randomMusic20 = Math.floor(Math.random() * 20);\n //const musicStock = Math.floor(Math.random() * 10) + 1;\n const musicItem = new MusicProduct({\n artistName: musicAlbums[randomMusic20].artist,\n albumName: musicAlbums[randomMusic20].title,\n //musicStock\n })\n await musicItem.save();\n }\n};\n\n//Fill the Book products database with 20 random books taken from the music seeds file\nconst bookSeedDB = async () => {\n await BookProduct.deleteMany({});\n for (let i = 0; i < 20; i++) {\n const randomBook20 = Math.floor(Math.random() * 20);\n //const bookStock = Math.floor(Math.random() * 10) + 1;\n const bookItem = new BookProduct({\n bookAuthor: bookNovels[randomBook20].authors,\n bookName: bookNovels[randomBook20].title,\n //ookStock\n })\n await bookItem.save();\n }\n};\n\n// Close the connection to DB after finish seeding\nmusicSeedDB().then(() => {\n mongoose.connection.close();\n});\n\nbookSeedDB().then(() => {\n mongoose.connection.close();\n});\n",
"text": "Hi All,I have recently started on a project at my University, and part of this project is including a seeds file to seed a DB with test information. Previously, this has worked fine but now I am getting the following error messages every time I run the seeds file in node.js:Database connected\nD:\\OUWork\\Year 6\\TM470\\Project\\node_modules\\mongodb\\lib\\operations\\execute_operation.js:24\nthrow new error_1.MongoNotConnectedError(‘Client must be connected before running operations’);\n^MongoNotConnectedError: Client must be connected before running operations\nat executeOperationAsync (D:\\OUWork\\Year 6\\TM470\\Project\\node_modules\\mongodb\\lib\\operations\\execute_operation.js:24:19)\nat D:\\OUWork\\Year 6\\TM470\\Project\\node_modules\\mongodb\\lib\\operations\\execute_operation.js:12:45\nat maybeCallback (D:\\OUWork\\Year 6\\TM470\\Project\\node_modules\\mongodb\\lib\\utils.js:338:21)\nat executeOperation (D:\\OUWork\\Year 6\\TM470\\Project\\node_modules\\mongodb\\lib\\operations\\execute_operation.js:12:38)\nat Collection.insertOne (D:\\OUWork\\Year 6\\TM470\\Project\\node_modules\\mongodb\\lib\\collection.js:148:57)\nat NativeCollection. 
[as insertOne] (D:\\OUWork\\Year 6\\TM470\\Project\\node_modules\\mongoose\\lib\\drivers\\node-mongodb-native\\collection.js:226:33)\nat Model.$__handleSave (D:\\OUWork\\Year 6\\TM470\\Project\\node_modules\\mongoose\\lib\\model.js:309:33)\nat Model.$__save (D:\\OUWork\\Year 6\\TM470\\Project\\node_modules\\mongoose\\lib\\model.js:388:8)\nat D:\\OUWork\\Year 6\\TM470\\Project\\node_modules\\kareem\\index.js:387:18\nat D:\\OUWork\\Year 6\\TM470\\Project\\node_modules\\kareem\\index.js:113:15 {\n[Symbol(errorLabels)]: Set(0) {}\n}Node.js v18.12.1For reference (if it helps), here is the seeds file I have created and run:To be fair, the seeds file still seems to run as the database does update with the seeded information, but I would much rather get to the bottom of the error so I can stop it appearing.Thank you for your help in advance ",
"username": "gary_easton"
},
{
"code": "#!/usr/bin/env node\nimport { MongoClient } from 'mongodb';\nimport { spawn } from 'child_process';\nimport fs from 'fs';\n\nconst DB_URI = 'mongodb://0.0.0.0:27017';\nconst DB_NAME = 'DB name goes here';\nconst OUTPUT_DIR = 'directory output goes here';\nconst client = new MongoClient(DB_URI);\n\nasync function run() {\n try {\n await client.connect();\n const db = client.db(DB_NAME);\n const collections = await db.collections();\n\n if (!fs.existsSync(OUTPUT_DIR)) {\n fs.mkdirSync(OUTPUT_DIR);\n }\n\n collections.forEach(async (c) => {\n const name = c.collectionName;\n await spawn('mongoexport', [\n '--db',\n DB_NAME,\n '--collection',\n name,\n '--jsonArray',\n '--pretty',\n `--out=./${OUTPUT_DIR}/${name}.json`,\n ]);\n });\n } finally {\n await client.close();\n console.log(`DB Data for ${DB_NAME} has been written to ./${OUTPUT_DIR}/`);\n }\n}\nrun().catch(console.dir);\nconst mongoose = require('Mongoose');\nmongoose.connect(\"MongoDB://localhost:<PortNumberHereDoubleCheckPort>/<DatabaseName>\", {useNewUrlParser: true});\nconst <nameOfDbschemahere> = new mongoose.schema({\n name: String,\n rating: String,\n quantity: Number,\n someothervalue: String,\n somevalue2: String,\n});\n\nconst Fruit<Assuming as you call it FruitsDB> = mongoose.model(\"nameOfCollection\" , <nameOfSchemeHere>);\n\nconst fruit = new Fruit<Because FruitsDB calling documents Fruit for this>({\n name: \"Watermelon\",\n rating: 10,\n quantity: 50,\n someothervalue: \"Pirates love them\",\n somevalue2: \"They are big\",\n});\nfruit.save();\n",
"text": "Take a look at these two example scripts, first is Node.JS, second is Mongoose.The points I want to drive home with the first, is how the connections to the DB are being established and verified before the rest of the operations. And comparatively to how similar connections work with Mongoose, as you can choose to use Mongoose for redundancy to ensure the client connection if you’d like.Mongoose:Mongoose Script",
"username": "Brock"
},
{
"code": "",
"text": "Could it be because you wroteawait client.close();",
"username": "anont_mon"
},
{
"code": "",
"text": "error === {message : “Client must be connected before running operations”}\ni am facing this type of error so many times i worked it but i couldn’t fix that bug",
"username": "Madhesh_Siva"
},
{
"code": "",
"text": "Yes, you are absolutely right…",
"username": "Zahidul_Islam_Sagor"
}
] | [
"node-js",
"mongoose-odm"
] |
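The error in this thread follows from the last lines of the seeds file: musicSeedDB() and bookSeedDB() run concurrently and each closes the shared mongoose connection, so whichever finishes second issues its saves against a closed client. A runnable sketch of the fix, with hypothetical no-op seeders standing in for the real ones (the pattern, not the mongoose calls, is the point):

```javascript
// Stand-ins for the real seeders; `closed` models mongoose.connection.close().
let closed = false;
const seeded = [];

const musicSeedDB = async () => { seeded.push("music"); };
const bookSeedDB = async () => {
  if (closed) throw new Error("Client must be connected before running operations");
  seeded.push("books");
};

// Fix: await BOTH seeders, then close the connection exactly once.
// (The original called .then(() => mongoose.connection.close()) on each,
// so the faster seeder closed the client out from under the slower one.)
async function seedAll() {
  closed = false;
  seeded.length = 0;
  await Promise.all([musicSeedDB(), bookSeedDB()]);
  closed = true; // mongoose.connection.close() goes here
}

seedAll();
```

This also explains why the database still ended up seeded: the saves were already in flight when the first close fired.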
2023-01-20T23:24:01.307Z | null | 5,937 | MongoNetworkError: connection 1 to *IP*:27017 closed | MongoNetworkError: connection 1 to *IP*:27017 closed | [
{
"code": " MongoNetworkError: connection 1 to *IP*:27017 closed\n at Connection.onClose (.../node_modules/mongodb/lib/cmap/connection.js:134:19)\n at TLSSocket.<anonymous> (.../node_modules/mongodb/lib/cmap/connection.js:62:46)\n at TLSSocket.emit (node:events:513:28)\n at TLSSocket.emit (node:domain:489:12)\n at node:net:301:12\n at TCP.done (node:_tls_wrap:588:7)\nrequest.context.callbackWaitsForEmptyEventLoop = false;\n",
"text": "Hello all, we are using MongoDB Serverless and connecting to it from AWS Lambda. I realized that after a while (a few minutes) of idle, subsequent database queries returns this error:The next few requests will continue to fail as each connection within the pool fails and reconnects. At one point the failures will subside, until I leave it idle for X minutes, and the problem will surface again.For each of my Lambda function, I have this set:with the mongodb client instance outside of the handler function, as suggested by the mongodb for lambda guide.I can also confirm it is not a network access issue as it currently has 0.0.0.0 allowed and it works consistently at fresh start.Any suggestions or help will be much appreciated! Thanks",
"username": "Danny_Yang"
},
{
"code": "",
"text": "@Danny_Yang it might have something to do with whitelisted ip addresses…",
"username": "Occian_Diaali"
},
{
"code": "",
"text": "@Danny_Yang – Did you find a solution? Running into the same problem in Vercel API routes.",
"username": "Divyahans_Gupta"
},
{
"code": "",
"text": "Please did you get a solution to this. I’m using nextjs and running into the same error",
"username": "Efosa_Uyi-Idahor"
},
{
"code": "",
"text": "Maybe it’s because your current ip address and ip address in mongodb atlas are not the same.\nyou need to add your current address to network access in mongodb atlas project that are you working on or add 0.0.0.0/0 to ip address list and this will make any Someone who can log in has a connection to the database (I recommend the first method)",
"username": "Panda_Music"
},
{
"code": "",
"text": "the second option is really an easy alternative to do with. can you specifying the first option, please? Like we need to adding extra option script inside our code or something.\nbecause it will throw some error when you just adding a character. you need to add your currenet IP address over and over after you make changes.",
"username": "Orcastra"
},
{
"code": "",
"text": "Through Network access option , Whitelist your current IP address or simply allow access from everywhere, i.e. 0.0.0.0/0 (includes your current IP address), also ensure that you have good internet connection.\nThis worked in my case. Hoping same for others.",
"username": "Utsav_raj"
},
{
"code": "",
"text": "I have whitelisted my ip, but I still have this issue. Does anyone have another solution?",
"username": "Emmanuel_Davis"
},
{
"code": "",
"text": "Currently getting the same error inside AWS using a lambda connecting to a mongodb instance.\nWhat is different is that the connection does work for most of the collections in the database. It is only one collection that while not large in terms of mongo collections is the larger than our other collections in our instance.Very weird that it would be happening with just one collection.edit: this is related to the size of the collection being read. In testing, limiting the size of the collection to 50 items does get past the error, but not useful for a production solution.",
"username": "Brian_Forester1"
}
] | [
"node-js",
"serverless"
] |
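For the Lambda case above, the usual mitigation is to cache a single connection promise outside the handler so warm invocations reuse it, and to let the driver drop idle sockets proactively (e.g. via a maxIdleTimeMS option) rather than discovering them dead on the next query. The sketch below uses a stub client so it runs anywhere; in real code you would construct MongoClient from the 'mongodb' package, and the option values are assumptions to tune:

```javascript
// Stub in place of `new MongoClient(uri, { maxIdleTimeMS: 60_000 })`
// so the caching pattern itself is runnable without a database.
let connects = 0;
const makeClient = () => ({
  connected: false,
  async connect() { connects += 1; this.connected = true; return this; },
});

// Cache the connection PROMISE at module scope, outside the handler,
// so concurrent invocations in one warm container share a single client.
let clientPromise = null;
function getClient() {
  if (!clientPromise) clientPromise = makeClient().connect();
  return clientPromise;
}

async function handler() {
  const client = await getClient(); // cold start connects; warm starts reuse
  return client.connected;          // ...run queries with `client` here
}
```

Caching the promise (not the client) also prevents a thundering herd of parallel connects when several requests hit one cold container at once.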
2022-09-16T07:09:08.408Z | null | 6,639 | WARNING! -- Mongo DB Serverless Pricing | WARNING! – Mongo DB Serverless Pricing | [
{
"code": "",
"text": "Hi Folks,Just a quick word of warning… if you’ve got a Mongo DB serverless database running you better keep an eye on the bill. I’ve just been stung for $155 for 2 1/2 days. To say I’m shocked is an understatement .The pricing structure is really not user friendly and TBH I feel a bit scammed. Mongo Team - I feel like there should be some kind of warning during the setup about the possibility of this, or maybe force the user to set a price cap per month. Perhaps example pricing would also help…FYI - I only uploaded about 4.5 million docs (1.5 gb) and did some ‘manual testing’ which involved searching for records and displaying the data on a webpage. I also had a server add new docs every 15 minutes (about 20-100 docs)… not exactly enterprise scale stuff.Hopefully nobody else get caught out like me Take it easy,Nathan.",
"username": "Nathan_Shields"
},
{
"code": "",
"text": "Hi Nathan,\nI work in the serverless PM team and I am sorry your experience has been less than perfect. As serverless is one of the latest offering from MongDB, we are always looking to improve the product. I will be reaching out to you to discuss your issues in more details.Vishal",
"username": "Vishal_Dhiman"
},
{
"code": "",
"text": "Hello Folks,So I setup a serverless Mongo DB database to test a small project I’m working on… it’s ended up costing me $190 for 3 days. Luckily I spotted the bill - God only knows what it could have been by the end of the month!So what did I do wrong?\nSimple - I chose ‘serverless instance’ instead of a fixed monthly contract. The serverless sounds cheap - just pay for what you use… however the pricing structure isn’t very user friendly and you can be caught out BIG TIME! How big? Well my database of 4.5million simple documents, with me as the only user, cost me ~$190 for 3 days!Please don’t be like me.Mongo team, you really need to work on the docs for that product. Give some examples of how the pricing works in ‘real life scenarios’. Searching online shows I’m not the first to be hit by this… I hope I’m the last!On the plus side the M5 server I’ve now got is very good… and not any slower! It will also take at least 6 months to run up the same bill… not 3 days!!Take care folks,Nathan.",
"username": "Nathan_Shields"
},
{
"code": "",
"text": "so, did you fix this? i’m about to use this",
"username": "AL_N_A"
},
{
"code": "",
"text": "Hi friends!\nBig warning! Mongodb serverless is a real scam. I was charged $312 for a 7 days for nothing. More than $40 per day!! We just uploaded a database of small online community application. 1.5 gbyte. Less than 200 active users a day. Their pricing claims that cost of read is $0.1 per million reads. This means my 200 users made 400million reads a day!!\nI tried to reach them through a support chat but every response from them takes hours. I requested some details. NO ANSWER.\nBottom line I was a big fun of mongodb and deployed a dozen of projects. I recommend my customers mongodb whenever it may fit requirements. It is very convenient database especially for startups,but they caught us this time \nThey always were more expensive then others, but this time they bet my imagination. I’m really disappointed and angry. I hope my lesson will be helpful for others. So I’m still waiting for a details. Will share with you in my blog.\nCheers!",
"username": "Pavel_Smirnov"
},
{
"code": "",
"text": "mongodb and deployed a dozen of projects. I recommend my customers mongodb whenever it may fit requirements. It is very convenient database especially for startups,but they caught us this timeWe have a very simlar problem. For a day when nobody used our platform, mongo says they received 0.1M read, which is impossible. With a database of 95MB and 200 users using our platfor for 2 hours, they say we did more than 200M read. I think they are wrong computing the reads (or this is a scam)",
"username": "Fabrizio_Ruggeri"
},
{
"code": "",
"text": "Thank you all for reporting this. I was just about to subscribe to serverless and feared that I was unable to calculate the cost for our application. A friend of mine warned me about their pricing and surprises that may come up. You confirmed this. I’ll stay miles away from Serverless for the moment.",
"username": "Louis"
},
{
"code": "",
"text": "My bill went from $20/month to over $1000 . I looked at the billing usage and it started to spike on the 04/08/23 to 500 million RPU (weird its a round number). keep in mind there were no development changes and no change in traffic. Support said they will get back to me. Had to shut down the project in the mean time until this gets figured out. Hopefully support will get back to me soon.",
"username": "Zeke_Schriefer"
},
{
"code": "",
"text": "We are currently giving it a try and i think it really is VERY expensive, definitely discussing migrating back in the next weeks.",
"username": "Milo_Tischler"
},
{
"code": "",
"text": "Hi,I am from the Atlas Serverless team and am very sorry for the experience you’ve had. Please see the “Serverless Pricing” section of this post for more information on how the bill is calculated along with this article on helpful tips to optimize workloads on your Serverless instance.We apologize for the experience you have had. Please let us know if you have any additional questions by Direct Messaging me or by contacting support by clicking on the icon on the bottom right of this page.Regards,\nAnurag",
"username": "Anurag_Kadasne"
},
{
"code": "",
"text": "I am seeing the same issue here, it’s been running for 5 days now and already $20? How do I see the reads? I am sure I barely have any reads, I use the DB once a day, around 7k records.",
"username": "Ed_Durguti"
},
{
"code": "",
"text": "If you go to the “View Monitoring” tab, you should be able to see a chart for “Read/Write Units”. I would also recommend you take a look at the article and post linked in my previous post to get a better understanding of how pricing for serverless works. Please feel free to direct message me if you have other questions.Regards,\nAnurag",
"username": "Anurag_Kadasne"
},
{
"code": "",
"text": "I created a serverless instance to test for my app! I was in the free tier and moved to serverless assuming my costs would be better matched for that case as I have spiking load. To my shock, I am seeing a 20 dollar daily bill, with RPUs spiking into millions for no obvious reasons. There seems to be some issues with the pricing here. This basically makes serverless not cost effective at all and should I just go to dedicated instance?",
"username": "Parikh_Jain"
},
{
"code": "",
"text": "Thanks, I read the doc and then adjusted accordingly, indexed fields has resulted in less RPUs",
"username": "Ed_Durguti"
},
{
"code": "",
"text": "Hi Parikh_JainPlease see the “Serverless Pricing” section of this post for more information on how the bill is calculated along with this article on helpful tips to optimize workloads on your Serverless instance.",
"username": "Anurag_Kadasne"
},
{
"code": "",
"text": "I want to point out that there is a bug if you are using MongoDB Compass that can lead to indexes not being used.If you create an index in MongoDB Compass, it won’t be used by the Node.js driver (and possibly others, though I can’t speak to that personally).MongoDB Compass will be able to use the index just fine, making you think it’s working, when it’s not. People should be aware this is a possible cause of their bill being high.I explain more in the post below:",
"username": "Justin_Jaeger"
},
{
"code": "",
"text": "I had the same issue, $122 in 2 days. I had a few megabytes of data on my instance, I won’t ever use this product again. The app was not live, it had 0 users. Just me playing around with my API\nScreenshot 2023-09-06 at 5.23.51 PM2282×904 266 KB\n",
"username": "Yusuf_Bagha"
},
{
"code": "",
"text": "Hi YusufI am from the Serverless team and am terribly sorry about the experience you’ve had. Based on your screenshot, it seems like there were a lot of unindexed queries being run. I have sent you a direct message to better understand your use case. I would also suggest checking out the links posted in my responses above. Looking forward to corresponding over direct message.",
"username": "Anurag_Kadasne"
},
{
"code": "",
"text": "It’s worth mentioning again, after reading the docs my bill is significantly lower, although this is just a learning app for me, so no real customers/data.",
"username": "Ed_Durguti"
},
{
"code": "",
"text": "Also got hit for $180 in 7 days. We have 0 users besides 2 developers testing our website. We do have a socket API writing to database constantly but the bandwidth for that is minimal. Pretty ridiculous. Definitely feels like a scam.",
"username": "Sesan_Chang"
}
] | [
"serverless"
] |
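The bills in this thread are consistent with unindexed reads: serverless read units are charged per document (and index page) scanned, so a collection scan pays for every document in the collection on every query. A back-of-envelope estimate using the numbers posted above (4.5M documents, $0.10 per million reads); the per-day query count is a made-up illustration:

```javascript
// Rough RPU arithmetic; the query count is hypothetical.
const PRICE_PER_MILLION = 0.10;   // USD per million read units (from the thread)
const docs = 4_500_000;           // collection size from the first post
const queriesPerDay = 300;        // hypothetical light manual testing

// Unindexed: each query scans the whole collection.
const unindexedCostPerDay = (docs * queriesPerDay / 1_000_000) * PRICE_PER_MILLION;

// Indexed: assume a selective index reads ~10 units per query instead of 4.5M.
const indexedCostPerDay = (10 * queriesPerDay / 1_000_000) * PRICE_PER_MILLION;

console.log(unindexedCostPerDay.toFixed(2)); // 135.00 (dollars per day)
console.log(indexedCostPerDay.toFixed(6));
```

That unindexed figure is the same order as the $40-60/day bills reported above, which is why the replies keep pointing at indexes rather than at traffic volume.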
2022-06-11T12:40:21.913Z | null | 14,092 | Unable to look up TXT record for host ****.****.mongodb.net from GCP ap | Unable to look up TXT record for host ****.****.mongodb.net from GCP ap | [
{
"code": "",
"text": "I have a single node GCP cluster with my spring-boot app running. I recently created a new DB [M10] cloud atlas and trying to connect it via my app. However, I keep getting the following error - screenshot attached\n\nScreenshot 2022-06-11 at 14.27.531920×961 289 KB\nHowever, I am able to connect to my old DBs [M20 and M10] via the same GCP cluster and same spring-boot app.I am unable to figure out what could be the reason? Any idea?Thanks in advance\nPranav",
"username": "Pranav_Jariwala"
},
{
"code": "",
"text": "Is there a final “Caused by” that’s cut off from that screen shot? That could help diagnose the issue. It’s possible that you’re running into https://jira.mongodb.org/browse/JAVA-4018, in which case updating the Java driver to 4.5.1 or later will fix it.Is this a self-managed cluster, or are using Atlas, or some other service? I am pretty sure that Atlas always registers a TXT record in DNS, but other services may not.",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "final caused by :\n\nScreenshot 2022-06-11 at 22.39.332870×1014 414 KB\nI am using Atlas and spring-boot-starter-data-mongodb 4.6.0the other 2 DBs are also from Atlas and I am able to connect to them",
"username": "Pranav_Jariwala"
},
{
"code": "",
"text": "from the bug you referenced it looks like it may have re-appeared in 4.6.0.",
"username": "Pranav_Jariwala"
},
{
"code": "",
"text": "4.6.0sorry i am using spring-boot-starter-data-mongodb 4.7.0 which internally uses 4.6.0 mongo java driver",
"username": "Pranav_Jariwala"
},
{
"code": "",
"text": "You can see from the stack trace that it’s using mongodb-driver-core-4.4.0.jar.",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Thanks for spotting that. But after upgrading I am still getting DNS issues\nScreenshot 2022-06-12 at 16.26.112672×864 316 KB\n",
"username": "Pranav_Jariwala"
},
{
"code": "",
"text": "It is not the same address as before.",
"username": "steevej"
},
{
"code": "",
"text": "yeah… i deleted the previous DB and created new one",
"username": "Pranav_Jariwala"
},
{
"code": "",
"text": "any idea what might be wrong? I am using the same old username/pass combination that I have for the other 2 DB [no special character] . didn’t find anything relevant online",
"username": "Pranav_Jariwala"
},
{
"code": "",
"text": "There’s something wrong with the new Atlas DB instances. There seems to be an issue only while connecting with the new DB. The old DB connection are working perfectly fine",
"username": "Pranav_Jariwala"
},
{
"code": "~$ dig SRV _mongodb._tcp.hk-prod.h8dkbpg.mongodb.net\n\n; <<>> DiG 9.10.6 <<>> SRV _mongodb._tcp.hk-prod.h8dkbpg.mongodb.net\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19870\n;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1\n\n;; OPT PSEUDOSECTION:\n; EDNS: version: 0, flags:; udp: 512\n;; QUESTION SECTION:\n;_mongodb._tcp.hk-prod.h8dkbpg.mongodb.net. IN SRV\n\n;; ANSWER SECTION:\n_mongodb._tcp.hk-prod.h8dkbpg.mongodb.net. 60 IN SRV 0 0 27017 ac-gokumri-shard-00-00.h8dkbpg.mongodb.net.\n_mongodb._tcp.hk-prod.h8dkbpg.mongodb.net. 60 IN SRV 0 0 27017 ac-gokumri-shard-00-01.h8dkbpg.mongodb.net.\n_mongodb._tcp.hk-prod.h8dkbpg.mongodb.net. 60 IN SRV 0 0 27017 ac-gokumri-shard-00-02.h8dkbpg.mongodb.net.\n\n;; Query time: 167 msec\n;; SERVER: 127.0.0.1#53(127.0.0.1)\n;; WHEN: Sun Jun 12 12:08:08 EDT 2022\n;; MSG SIZE rcvd: 256\n\n~$ dig TXT _mongodb._tcp.hk-prod.h8dkbpg.mongodb.net\n\n; <<>> DiG 9.10.6 <<>> TXT _mongodb._tcp.hk-prod.h8dkbpg.mongodb.net\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7702\n;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1\n\n;; OPT PSEUDOSECTION:\n; EDNS: version: 0, flags:; udp: 512\n;; QUESTION SECTION:\n;_mongodb._tcp.hk-prod.h8dkbpg.mongodb.net. IN TXT\n\n;; ANSWER SECTION:\n_mongodb._tcp.hk-prod.h8dkbpg.mongodb.net. 60 IN TXT \"authSource=admin&replicaSet=atlas-tbk5qg-shard-0\"\n\n;; Query time: 35 msec\n;; SERVER: 127.0.0.1#53(127.0.0.1)\n;; WHEN: Sun Jun 12 12:08:57 EDT 2022\n;; MSG SIZE rcvd: 131\n\n",
"text": "Notice that now the TXT record is no longer failing (that happens first) but SRV lookup is still failing. Using dig, I’m able to resolve both TXT and SRV records for this host:Can you try the same from the same server on which you’re running the Java application?",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "looks like dig worked.\n\nScreenshot 2022-06-12 at 18.45.311752×1494 139 KB\nthen maybe some issue with com.mongodb.internal.dns.DefaultDnsResolver ?",
"username": "Pranav_Jariwala"
},
{
"code": "",
"text": "I think it is something in your environment. I am able to connect successfully to your cluster from both the mongo shell and a Java application.",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "If the issue was with the environment wouldn’t there be an issue connecting to other Atlas DB? [as mentioned previously, I am able to connect to the other 2 DB via the same GCP server, same application]",
"username": "Pranav_Jariwala"
},
{
"code": "",
"text": "Not necessarily, but I don’t have a hypothesis that explains what both of us are seeing. I think at this point you should open a support ticket (https://www.mongodb.com/docs/manual/support/) to try to get to the root cause.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'appConfig' defined in URL [jar:file:/opt/app/server/app.jar!/BOOT-INF/classes!/com/devnotes/server/app/AppConfig.class]: Bean instantiation via constructor failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.devnotes.server.app.AppConfig]: Constructor threw exception; nested exception is com.mongodb.MongoConfigurationException: Unable to look up TXT record for host dev.7f***.mongodb.net at org.springframework.beans.factory.support.ConstructorResolver.instantiate\n",
"text": "Having the same issue as Pranav. GCP is failing to connect, works on my local environment. GCP works when connecting to older instances of Atlas.",
"username": "Robert_Timm"
},
{
"code": "",
"text": "Nevermind, just started happening in my local environment as well.",
"username": "Robert_Timm"
},
{
"code": "cluster0.dnqegeu.mongodb.netcluster0.xfzokj5.mongodb.netmongodb+srv://mongodb-driver-core-4.8.2.jar",
"text": "I’m facing the same issue, but did not find any solution on the forum here which works.\nI can connect to a DB on cluster0.dnqegeu.mongodb.net or cluster0.xfzokj5.mongodb.net with the MongoDB Compass application,\nBUT my java spring application fails to connect to a URI with mongodb+srv://\nWhile the mongodb driver should use the SRV entries of the DNS, it looks for a TXT entry.\nThe library in use is mongodb-driver-core-4.8.2.jar\nAny suggestion please?",
"username": "Dominik_Knoll"
},
{
"code": ";QUESTION\ncluster0.dnqegeu.mongodb.net. IN ANY\n;ANSWER\ncluster0.dnqegeu.mongodb.net. 60 IN TXT \"authSource=admin&replicaSet=atlas-c64af7-shard-0\"\ncluster0.dnqegeu.mongodb.net. 60 IN SRV 0 0 27017 ac-eheduyb-shard-00-00.dnqegeu.mongodb.net.\ncluster0.dnqegeu.mongodb.net. 60 IN SRV 0 0 27017 ac-eheduyb-shard-00-01.dnqegeu.mongodb.net.\ncluster0.dnqegeu.mongodb.net. 60 IN SRV 0 0 27017 ac-eheduyb-shard-00-02.dnqegeu.mongodb.net.\n;QUESTION\ncluster0.xfzokj5.mongodb.net. IN ANY\n;ANSWER\ncluster0.xfzokj5.mongodb.net. 60 IN TXT \"authSource=admin&replicaSet=atlas-kymzsm-shard-0\"\ncluster0.xfzokj5.mongodb.net. 60 IN SRV 0 0 27017 ac-1b0f5hn-shard-00-00.xfzokj5.mongodb.net.\ncluster0.xfzokj5.mongodb.net. 60 IN SRV 0 0 27017 ac-1b0f5hn-shard-00-01.xfzokj5.mongodb.net.\ncluster0.xfzokj5.mongodb.net. 60 IN SRV 0 0 27017 ac-1b0f5hn-shard-00-02.xfzokj5.mongodb.net.\n",
"text": "The scheme mongodb+srv involves 2 types of DNS records. A TXT record which supplied connection string parameter and SRV records which provides a list of hosts to connect to. Looking for a TXT is thus normal behaviour.What is the error message?As for the 2 clusters you share I get:andSo both looks correct.",
"username": "steevej"
}
] | [
"java",
"atlas-cluster",
"spring-data-odm"
] |
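The dig output in the thread above shows the two DNS lookups behind the mongodb+srv:// scheme: SRV records supply the host list and the TXT record supplies extra connection options. As a minimal sketch, using the hostnames from the thread, the resolution step can be approximated like this (the function is illustrative, not actual driver code; user and password are placeholders):

```python
# Sketch of how a driver expands a mongodb+srv:// URI into a classic
# mongodb:// seed-list URI, using lookup results like the dig output above.
# Hostnames and TXT options are the ones shown in the thread.

def expand_srv(srv_hosts, txt_options, user, password, auth_db="admin"):
    """Combine SRV targets (host, port) and TXT options into a mongodb:// URI."""
    seed_list = ",".join(f"{host}:{port}" for host, port in srv_hosts)
    return (
        f"mongodb://{user}:{password}@{seed_list}/{auth_db}"
        f"?ssl=true&{txt_options}"
    )

srv_hosts = [
    ("ac-gokumri-shard-00-00.h8dkbpg.mongodb.net", 27017),
    ("ac-gokumri-shard-00-01.h8dkbpg.mongodb.net", 27017),
    ("ac-gokumri-shard-00-02.h8dkbpg.mongodb.net", 27017),
]
txt_options = "authSource=admin&replicaSet=atlas-tbk5qg-shard-0"

uri = expand_srv(srv_hosts, txt_options, "user", "pass")
print(uri)
```

Drivers perform this expansion internally; when SRV resolution fails only in one environment, building the long mongodb:// form by hand like this is the practical workaround several replies in these threads fall back on.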
2020-05-01T19:36:46.972Z | null | 9,213 | Data modeling for social media followers/following... bucket pattern? | Data modeling for social media followers/following… bucket pattern? | [
{
"code": "",
"text": "Hi, I’m working on implementing social functionality in an app where users can follow each other. I’m trying to come up with a good approach for storing these follower/following relationships between users.I was initially going to use two collections… Follower and Following where each document would contain a user id, and an array of users the person is following, or who are following them.My concern with this approach is that if follower count gets very large, we will hit the 16mb BSON limit. An alternative would be having one document per follower relationship, which introduces other performance issues.Is a bucket pattern the best solution for this… or is there a better approach? I could have buckets of 100 followers and then also use those buckets for paginating lists of followers to display.However my concern is that when deleting a follower… all buckets for a given user would have to be searched and then when the follower is removed, an old bucket might have a gap. This gap could then be filled by a new follower in the future, but this would mean it would be more difficult to display the followers in order of new to old without sorting on date followed, which I imagine could become a performance issue.I’m relatively new to MongoDB and learning. This great article by @Justin mentions that this bucket gap issue can be solved by expressive update instructions introduced in MongoDB 4.2, but I’m having trouble understanding how this would resolve the issue:A look at how to speed up the Bucket PatternAny suggestions for using MongoDB when dealing with potentially large and ever growing social data like this?",
"username": "UC_Audio"
},
{
"code": "",
"text": "Following up to see if anyone has any suggestions on this.To summarize the questions above, is there a recommended approach or pattern for handling large amounts of follower/following relationships between users in a social media context using MongoDB?",
"username": "UC_Audio"
},
{
"code": "",
"text": "I found the answer here referencing the socialite project:Social Data Reference Architecture - This Repository is NOT a supported MongoDB product - GitHub - mongodb-labs/socialite: Social Data Reference Architecture - This Repository is NOT a supported Mo...For anyone else who comes across this, it looks like the suggested approach is to store each follow relationships as one individual document per relationship. So, two collections as noted above, but no bucket pattern, just individual follower or following documents.",
"username": "UC_Audio"
},
{
"code": "",
"text": "If you ever considered setting the maximum number of followers and subscriptions for each user?",
"username": "Alex_Deer"
},
{
"code": "",
"text": "It can be a really good feature, and today many people are using social media accounts only to gain followers, so if you plan to make a social media application, it can be a good feature if you get a decent explanation. This feature may define your social media project from other platforms like TikTok or Instagram, where all content creators try getting followers with all possible tools, including promotional services like tikdroid.com.",
"username": "Alex_Deer"
},
{
"code": "",
"text": "This post was flagged by the community and is temporarily hidden.",
"username": "Hamad_Ch"
}
] | [
"data-modeling"
] |
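The Socialite approach the thread settles on, one document per follow relationship, can be sketched as follows. An in-memory set stands in for the MongoDB collection (a real deployment would use a compound unique index on the two user-id fields); all names are illustrative:

```python
# Minimal sketch of the one-document-per-relationship model suggested by
# the Socialite project. Each set element models one document of the form
# {follower_id, followee_id}; a plain set stands in for the collection so
# the shape of the operations is visible without a database.

follows = set()

def follow(follower_id, followee_id):
    follows.add((follower_id, followee_id))

def unfollow(follower_id, followee_id):
    # A single targeted delete: no bucket scan, no gap to backfill.
    follows.discard((follower_id, followee_id))

def followers_of(user_id):
    return sorted(f for f, g in follows if g == user_id)

def following_of(user_id):
    return sorted(g for f, g in follows if f == user_id)

follow("alice", "bob")
follow("carol", "bob")
follow("alice", "carol")
unfollow("alice", "carol")
print(followers_of("bob"))
```

Because each relationship is its own document, an unfollow is one targeted delete, which sidesteps the bucket-gap and ordering concerns raised in the opening post.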
2022-03-02T13:23:42.061Z | null | 5,989 | pymongo.errors.ServerSelectionTimeoutError: SSL handshake failed | pymongo.errors.ServerSelectionTimeoutError: SSL handshake failed | [
{
"code": "",
"text": "pymongo.errors.ServerSelectionTimeoutError: SSL handshake failed: my-clus1-shard-00-00-pri.xyz.mongodb.net:27017: [SSL: WRONG_VERSION_NUMBER]Flask==2.0.2\nFlask-PyMongo==2.2.0\nrequests==2.20.1\npymongo[tls,srv]==4.0.1Any idea why the wrong ssl is from the packages or something else",
"username": "Howard_Hill"
},
{
"code": "",
"text": "What operating system, Python version and MongoDB version, and are you trying to connect through a proxy?",
"username": "Bernie_Hackett"
},
{
"code": "",
"text": "Docker\nFROM python:3.6.8-alpine3.9WORKDIR /appCOPY requirements.txt .RUN pip install -r requirements.txtRUN pip install gunicornCOPY flaskr/bar.py .EXPOSE 5000CMD [“python”, “bar.py”]",
"username": "Howard_Hill"
},
{
"code": "",
"text": "What version of MongoDB and where is MongoDB running?",
"username": "Bernie_Hackett"
},
{
"code": "",
"text": "I found that the ssl issue is a red herring , i was trying to connect to a db in a different region and the error presented as such.Its very strange and dishearting none the less everything is working now",
"username": "Howard_Hill"
},
{
"code": "",
"text": "how did you fixed it?, i have the same error",
"username": "Maayan_Cohen1"
},
{
"code": "",
"text": "It may be due to the OpenSSL updates. If you are on ubuntu:22.04, upgrade to ubuntu:23.x. However what OS are you on right now and the mongodb version now?",
"username": "Sibidharan_Nandakumar"
},
{
"code": "",
"text": "Hi, check your network access configuration in Atlas in case you are connecting to it.In my case, I was connecting to an instance in Atlas. As long as I have added the new IP, I was using from my network, I could connect",
"username": "Stephen_Vincent_Strange"
},
{
"code": "",
"text": "YES!!! I encountered the same error and after 2 hours of troubleshooting this is what solved my issue. Just add your IP to the cluster",
"username": "Vedant_Chaskar"
}
] | [
"python",
"atlas-cluster"
] |
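As the thread shows, [SSL: WRONG_VERSION_NUMBER] during server selection is often a red herring: the fixes reported here were pointing at the right cluster/region, adding the client IP to the Atlas Network Access list, and updating OpenSSL or the base image. The following is just that advice distilled into an illustrative checklist function, not a PyMongo API:

```python
# Illustrative triage for ServerSelectionTimeoutError / SSL handshake
# failures against Atlas, summarizing the fixes reported in this thread.
# This is NOT a PyMongo API; it only encodes the thread's advice.

CHECKS = [
    ("ip_allowlisted", "Add your client IP to the Atlas Network Access list."),
    ("correct_cluster", "Verify the URI points at the intended cluster/region."),
    ("openssl_current", "Upgrade OpenSSL / the base image (e.g. old Alpine or ubuntu:22.04)."),
]

def triage(state):
    """Return the remaining advice for every check not yet satisfied."""
    return [advice for key, advice in CHECKS if not state.get(key, False)]

# Example: everything verified except the IP allowlist.
hints = triage({"correct_cluster": True, "openssl_current": True})
print(hints)
```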
2022-05-05T11:06:00.266Z | null | 5,110 | Data API is slow | Data API is slow | [
{
"code": "",
"text": "I am trying to call data-api from postman, it’s taking nearly 3 seconds to give response. Is it the normal case? Because querying through connection URL is a lot more faster than this. Will data Api give the same performance as we get from connection URL or it’s slower than the other method.Following are the details of my cluster :\n\nScreenshot 2022-05-05 1603001618×458 42.9 KB\n\nWhatsApp Image 2022-05-05 at 4.12.29 PM1280×339 20.6 KB\n",
"username": "VIKASH_KUMAR_SHUKLA"
},
{
"code": "",
"text": "There is a lot more going on with Data API compared to a driver connection.So a lot more is going on.Data API is very convenient because it is a layer of abstraction where you do not have to manage an application server. But it comes at a performance cost. You have to be aware of that.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @VIKASH_KUMAR_SHUKLA welcome to the community!I would also like to point out that the Data API is still very much in Preview/Beta at this moment, so things might change for the better in the future.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Vikash - I’ll also jump in and add that we have some improvements planned that will make this much faster, including some optimizations internally and the ability to externally choose a better deployment model fit for your region and cloud provider.\nCurrently I can see you’ve deployed your cluster on GCP which may cause some latency since the API is default hosted on AWS which may lead to some cross-cloud latency.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Thank you Sumedha, I got your point. Now the problem is that we have deployed our Node Js (Back End) + React Js (Front End) on Google Cloud Platform. Also, our mongoDB cluster is deployed on GCP to reduce any latency. My questions are :",
"username": "VIKASH_KUMAR_SHUKLA"
},
{
"code": "",
"text": "Hi, is the data api still in beta.Was hoping to be able to use this in production, we too are seeing 4-5 seconds delay (calling from a cloudflare worker)We may shift some of our data storage to cloudflare d1 or kv stores… really looking for data to be stored at the edge.",
"username": "Rishi_uttam"
}
] | [
"data-api"
] |
2021-10-20T13:28:35.345Z | null | 31,140 | (Unauthorized) not authorized on admin to execute command | (Unauthorized) not authorized on admin to execute command | [
{
"code": "",
"text": "I have created free cluster and db on Mongo Atlas but I try to connect my app by mongoose.connect(‘mongodb+srv://Ahmed:[email protected]/bktrans?retryWrites=true&w=majority’)And the below error appears when I run rpm start on my API:(Unauthorized)not authorized on admin to execute command {listIndexes: “consumers“, cursor:{ }, $clusterTime: { clusterTime: {1634734692 10}, signature: { hash: {203 165 207 4 203 170 4 127 37 213 33 4 100 167 170 44 201 49 111 36} }} } …",
"username": "Ahmed_Habib"
},
{
"code": "",
"text": "Hi Ahmed ,I got this error recently while connecting from a node app on Heroku to MongoDB Atlas - I had issue with connection string .I tried using below the below format - and it worked.mongodb://:@-cluster0-shard-00-00.scxjr.mongodb.net:27017,-cluster0-shard-00-01.scxjr.mongodb.net:27017,-cluster0-shard-00-02.scxjr.mongodb.net:27017/?ssl=true&replicaSet=atlas--shard-0&authSource=admin&retryWrites=true&w=majorityIt worked.I got this from MongoDB Atlas tab → Deployment side bar → Databases → on right Connect button inside . Connect to your application section → Select Driver - Node.js and Version 2.2.12 or later - below it will show the example connection string.Please let me know if this solved your issue",
"username": "Paulson_Edamana"
},
{
"code": "",
"text": "God bless you. *Tears of joy :_ ))",
"username": "Amir_Abbas_Mousavi"
},
{
"code": "",
"text": "hello, I am getting same kind of error like:\n(Unauthorized) not authorized on admin to execute command { create: “sample”, capped: true, size: 5242880, writeConcern: { w: “majority” }, lsid: { id: {4 [133 92 127 66 175 140 79 35 174 118 57 40 59 137 39 110]} }, $clusterTime: { clusterTime: {1678037479 5}, signature: { hash: {0 [235 83 26 247 241 208 26 217 203 154 205 184 108 184 228 33 93 229 204 81]}, keyId: 7153350679543676928.000000 } }, $db: “admin” }i tried multi[ple ways for creating collection in db. This error i got is when i got connected to mongodb compass and tried to build a connection in db.\nThere are 3 radio buttons available with that, but cant choose which one to choose.\nPlease help me out.",
"username": "Gauravi_Raut"
},
{
"code": "",
"text": "I think you are trying to create your collection in admin db\nConnect to test db and try\nAlso share your compass connect string after hiding sensitive information like password,cluster address",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "hi\nI use your method to get connect string. But what it gives me is mongodb://zmz2:[email protected]:27017",
"username": "mingzi_zhang"
}
] | [
"node-js"
] |
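The working connection string in the thread above differs from the failing one mainly in its query options, notably authSource=admin. A standard-library sketch that checks a URI for the options the thread highlights (hostnames and credentials are placeholders, not the poster's real values):

```python
# Sketch: verify that a mongodb:// URI carries the query options the
# working connection string in this thread relied on (authSource=admin,
# ssl=true, replicaSet=...). Standard library only; values illustrative.

from urllib.parse import urlsplit, parse_qs

def missing_options(uri, required=("authSource", "ssl", "replicaSet")):
    present = parse_qs(urlsplit(uri).query)
    return [opt for opt in required if opt not in present]

bad = "mongodb://Ahmed:pass@host-00:27017,host-01:27017/bktrans?retryWrites=true"
good = bad + "&ssl=true&replicaSet=atlas-xyz-shard-0&authSource=admin"

print(missing_options(bad))   # the options the failing URI lacked
print(missing_options(good))  # empty once they are added
```

With mongodb+srv:// URIs the TXT record normally injects authSource and replicaSet automatically, which is why the long-form URI needs them spelled out explicitly.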
2021-10-26T06:27:29.355Z | null | 12,260 | Connect mongod exiting with code 1, can not see specific any error in log file | Connect mongod exiting with code 1, can not see specific any error in log file | [
{
"code": " MongoDB shell version v5.0.3\n connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\n Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\n connect@src/mongo/shell/mongo.js:372:17\n @(connect):2:6\n exception: connect failed\n exiting with code 1\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23017, \"ctx\":\"listener\",\"msg\":\"removing socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\"}}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the global connection pool\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784908, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the PeriodicThreadToAbortExpiredTransactions\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784909, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ReplicationCoordinator\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784910, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ShardingInitializationMongoD\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784911, \"ctx\":\"SignalHandler\",\"msg\":\"Enqueuing the ReplicationStateTransitionLock for shutdown\"}\n 
{\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784912, \"ctx\":\"SignalHandler\",\"msg\":\"Killing all operations for shutdown\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4695300, \"ctx\":\"SignalHandler\",\"msg\":\"Interrupted all currently running operations\",\"attr\":{\"opsKilled\":3}}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":5093807, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down all TenantMigrationAccessBlockers on global shutdown\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784913, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down all open transactions\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784914, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the ReplicationStateTransitionLock for shutdown\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":4784915, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the IndexBuildsCoordinator\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784916, \"ctx\":\"SignalHandler\",\"msg\":\"Reacquiring the ReplicationStateTransitionLock for shutdown\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784917, \"ctx\":\"SignalHandler\",\"msg\":\"Attempting to mark clean shutdown\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, 
\"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down free monitoring\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20609, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down free monitoring\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the HealthLog\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the TTL monitor\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":3684100, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down TTL collection monitor thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":3684101, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down TTL collection monitor thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the global lock for shutdown\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784930, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the storage engine\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22320, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down journal flusher thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22321, 
\"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down journal flusher thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22322, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down checkpoint thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.251+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22323, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down checkpoint thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.251+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20282, \"ctx\":\"SignalHandler\",\"msg\":\"Deregistering all the collections\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.251+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22261, \"ctx\":\"SignalHandler\",\"msg\":\"Timestamp monitor shutting down\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.251+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22317, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTigerKVEngine shutting down\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.251+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22318, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down session sweeper thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.251+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22319, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down session sweeper thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.251+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795902, \"ctx\":\"SignalHandler\",\"msg\":\"Closing WiredTiger\",\"attr\":{\"closeConfig\":\"leak_memory=true,\"}}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.252+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1634722334:252129][142289:0x7f6df9745700], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 18, snapshot max: 18 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 7\"}}\n 
{\"t\":{\"$date\":\"2021-10-20T16:32:14.274+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795901, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":23}}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.274+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22279, \"ctx\":\"SignalHandler\",\"msg\":\"shutdown: removing fs lock...\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.274+07:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"SignalHandler\",\"msg\":\"Dropping the scope cache for shutdown\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.274+07:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down full-time data capture\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.274+07:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20626, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down full-time diagnostic data capture\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.279+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"SignalHandler\",\"msg\":\"Now exiting\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.279+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":0}}\n",
"text": "Very sorry for my short question but I don’t know how to describe my problem. Here is info when I try to run mongo shell:And here are some logs:",
"username": "MAY_CHEAPER"
},
{
"code": "",
"text": "but when I run sudo mongod -f /etc/mongod.conf, I can connect to mongo shell",
"username": "MAY_CHEAPER"
},
{
"code": "sudo mongodsudomongodmongod",
"text": "Hi @MAY_CHEAPER,Were you able to find a solution to your issue? Your log snippet starts after shutdown has initiated, so I think the most interesting log lines are missing.If sudo mongod works fine, my first guess would be that there are problems with file & directory permissions that are ignored when you use sudo to start the mongod process as the root user.If so, I recommend fixing file & directory permissions so your mongod process can run as an unprivileged user.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I dont know what reason is but after many search on google, I reinstall MongoDB and it works fine. Thanks for your support.",
"username": "MAY_CHEAPER"
},
{
"code": "",
"text": "@Stennie_X I have the same issue when I am logged in as root to my system and then installing mongodb wondering why will I have permission issues ? and how do i give permission if at all to resolve this ? after i exec to my pod",
"username": "Shilpa_Agrawal"
},
{
"code": "[root@srv ~]# mongo\nMongoDB shell version v5.0.15\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:372:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\n[root@srv ~]# netstat -an | grep 27017\n[root@srv ~]#\nation: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongo\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# how the process runs\nprocessManagement:\n fork: true # fork and run in background\n pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1,<my server ip for example 178.1.1.1> # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\n\n\nsecurity:\n#authorization: enabled\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options\n\n#auditLog:\n",
"text": "Hello friends\nI have the same problem and I have been searching for about 2 days and I did not get any result and my problem is not solved. Please help me to solve my problem.\nThe error that is displayed for me:i use Centos 7.6 with cpanel,litespeed,cloudlinuxI also checked the port, but it doesn’t show anything:and the /etc/mongod.conf file is as follows:",
"username": "Hesam_Ramezani"
},
{
"code": "",
"text": "Your mongod should be up & running for you to connect to it\nIf you installed it as service need to start the service first\nIf not installed as service you need to start your mongod manually from command line with appropriate parameters",
"username": "Ramachandra_Tummala"
},
{
"code": "[root@srv ~]# sudo service mongodb start\nRedirecting to /bin/systemctl start mongodb.service\nFailed to start mongodb.service: Unit not found.\n[root@srv ~]# service mongod status\nRedirecting to /bin/systemctl status mongod.service\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\n Active: failed (Result: exit-code) since Mon 2023-02-27 16:57:53 +0330; 2h 20min ago\n Docs: https://docs.mongodb.org/manual\n Main PID: 1047495 (code=exited, status=2)\n\nFeb 27 16:57:53 srv.sayansite.com systemd[1]: Started MongoDB Database Server.\nFeb 27 16:57:53 srv.sayansite.com systemd[1]: mongod.service: main process exited, code=exited, status=2/INVALIDARGUMENT\nFeb 27 16:57:53 srv.sayansite.com systemd[1]: Unit mongod.service entered failed state.\nFeb 27 16:57:53 srv.sayansite.com systemd[1]: mongod.service failed.\n",
"text": "The Mongo service is installed but does not start:and status:pleas help me.",
"username": "Hesam_Ramezani"
},
{
"code": "service mongod statussudo service mongodb start",
"text": "What is funny is that you use the correct name forservice mongod statusbut the wrong one tosudo service mongodb start",
"username": "steevej"
},
{
"code": "sudo service mongodb start",
"text": "sudo service mongodb startYou are right, but it will be redirected to service mongod start.",
"username": "Hesam_Ramezani"
},
{
"code": "Redirecting to /bin/systemctl start mongodb.serviceFailed to start mongodb.service: Unit not found.",
"text": "it will be redirected to service mongod start.To mongodb.service perhaps as indicated by the warning:Redirecting to /bin/systemctl start mongodb.servicewhich is still wrong, otherwise you would not get an error message that saysFailed to start mongodb.service: Unit not found.As mentioned, you are using the correct name, that is mongod when you are querying the status but the wrong name, that is mongodb when you try to start the service.",
"username": "steevej"
},
{
"code": "mv /opt/homebrew/var/mongodb /opt/homebrew/var/mongodb-old\nbrew reinstall [email protected]\nbrew services restart mongodb/brew/[email protected]\n",
"text": "A reinstall helped me as well, but I had to move mongodb out of the way first. Something about the directory being in place that was preventing mongo from grabbing it again. I’m using brew…",
"username": "leemr"
}
] | [] |
2021-08-12T13:35:14.822Z | null | 84,535 | Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector | Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector | [
{
"code": "Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}.\n\nClient view of cluster state is {type=REPLICA_SET, servers=[\n {address=mongo01:27017, type=UNKNOWN, state=CONNECTING*},\n {address=mongo02:27017, type=REPLICA_SET_SECONDARY, roundTripTime=2.5 ms, state=CONNECTED},\n {address=mongo03:27017, type=REPLICA_SET_SECONDARY, roundTripTime=1.1 ms, state=CONNECTED}\n];\n",
"text": "I am getting the following errorusing spring data mongoits a multitenant environment\nchange/create the mongoclient connection per request. Database/user/password change as tenant request.Issue is intermittent and keeps happening randomly.",
"username": "Kushal_Somkuwar"
},
{
"code": "mongo01",
"text": "Hi @Kushal_Somkuwar,Looking at the logging message - the driver is having trouble connecting to the primary node. The client view of the cluster state shows the state as seen from the Java driver (which spring data uses underneath).Connecting appears to take longer than the 30s timeout to connect to the mongo01 node. So things to check are the server logs - has there been a change in the replicaset topology - eg new primary and secondaries? Also have there been any networking issues between the application and the mongodb nodes?Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Hi @Ross_Lawley ,so application connection with mongodb works fine initially but if the data json is big than it become slow for application and start showing the above error.we have checked the server logs there so no such of connection and I primary node is running fine there is no issue with cluster as well we suspect the same but why would it connect initially if there is a network issue.Kushal",
"username": "Kushal_Somkuwar"
},
{
"code": "",
"text": "Hi @Kushal_Somkuwar,If the driver initially connects then later cannot connect, then that points to some networking issue. It may be intermittent and not happen all the time. But from the drivers perspective it is timing out trying to select a server. The log message just shows you the current view of the servers after selection has failed, which is it is in the process of connecting to the mongo1 node.Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Hi @Ross_LawleyI also think the same but we are not getting any exception on mongodb server logs and application just get that error. Is there a way to identify what is exactly is the network issue ?",
"username": "Kushal_Somkuwar"
},
{
"code": "",
"text": "@Kushal_Somkuwar We are facing a similar issue and we are connecting to mongo from Google Cloud.\nDid you find out what was the issue? If so, how did you resolve it?",
"username": "Sandhya_Kripalani"
},
{
"code": "",
"text": "@Ross_Lawley Kindly help. Below is the stack trace\n“No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=xxxx, type=UNKNOWN, state=CONNECTING, exception={java.lang.OutOfMemoryError: Java heap space}}, ServerDescription{address=xxx, type=UNKNOWN, state=CONNECTING, exception={java.lang.OutOfMemoryError: Java heap space}}, ServerDescription{address=xxxx, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1401391508, setName=xxx’, canonicalAddress=xxx, hosts=[xxx], passives=[], arbiters=[], primary=xxxx’, tagSet=TagSet{[Tag{name=‘nodeType’, value=‘ELECTABLE’}, Tag{name=‘provider’, value=‘GCP’}, Tag{name=‘region’, value=‘EASTERN_US’}, Tag{name=‘workloadType’, value=‘OPERATIONAL’}]}, electionId=null, setVersion=6, lastWriteDate=Wed Apr 13 03:33:15 UTC 2022, lastUpdateTimeNanos=6819537012507}]}. Waiting for 30000 ms before timing out”",
"username": "Sandhya_Kripalani"
},
{
"code": "java.lang.OutOfMemoryError: Java heap space",
"text": "Hi @Sandhya_Kripalani,I noticed this java.lang.OutOfMemoryError: Java heap space in the error message.I’ve never seen that reported before in such a way (as part of the No server chosen exception) but its a sign that your JVM needs more resources.Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "@Sandhya_Kripalani you will need to check mongoclient and the number of connections it is making with MongoDB it should be on the higher side. This issue comes when your application keep creating mongoclient object and not closing the existing mongoclient connections. Hope that will help",
"username": "Kushal_Somkuwar1"
},
{
"code": "org.springframework.data.mongodb.core.ReactiveMongoTemplate - Streaming aggregation: [{ \"$match\" : { \"type\" : { \"$in\" : [\"INACTIVE_SITE\", \"DEVICE_NOT_BILLED\", \"NOT_REPLYING_POLLING\", \"MISSING_KEY_TECH_INFO\", \"MISSING_SITE\", \"ACTIVE_CIRCUITS_INACTIVE_RESOURCES\", \"INCONSISTENT_STATUS_VALUES\", \"TEST_SUPERVISION_RANGE\"]}}}, { \"$project\" : { \"extractionDate\" : 1, \"_id\" : 0}}, { \"$group\" : { \"_id\" : null, \"result\" : { \"$max\" : \"$extractionDate\"}}}, { \"$project\" : { \"_id\" : 0}}] in collection kpi\norg.mongodb.driver.cluster - No server chosen by com.mongodb.reactivestreams.client.internal.ClientSessionHelper$$Lambda$1183/0x000000080129dd60@292878db from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=10.235.79.213:7915, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=10.235.79.89:7915, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out\nluster - No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=10.235.79.213:7915, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=10.235.79.89:7915, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out\ncom.obs.dqsc.interceptor.GlobalExceptionHandler - Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=REPLICA_SET, servers=[{address=10.235.79.213:7915, type=UNKNOWN, state=CONNECTING}, {address=10.235.79.89:7915, type=UNKNOWN, state=CONNECTING}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. 
Client view of cluster state is {type=REPLICA_SET, servers=[{address=10.235.79.213:7915, type=UNKNOWN, state=CONNECTING}, {address=10.235.79.89:7915, type=UNKNOWN, state=CONNECTING}]\norg.springframework.web.servlet.mvc.method.annotation.ExceptionHandlerExceptionResolver - Resolved [org.springframework.dao.DataAccessResourceFailureException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=REPLICA_SET, servers=[{address=10.235.79.213:7915, type=UNKNOWN, state=CONNECTING}, {address=10.235.79.89:7915, type=UNKNOWN, state=CONNECTING}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=REPLICA_SET, servers=[{address=10.235.79.213:7915, type=UNKNOWN, state=CONNECTING}, {address=10.235.79.89:7915, type=UNKNOWN, state=CONNECTING}]]\norg.springframework.web.servlet.DispatcherServlet - Completed 500 INTERNAL_SERVER_ERROR\n @Override\n public MongoClient reactiveMongoClient() {\n ConnectionString connectionString = new ConnectionString(\n \"mongodb://\" + username + \":\" + password +\n \"@\" + host1 + \":\" + port\n + \",\" + host2 + \":\" + port +\n \"/?authSource=\" + authSource +\n \"&replicaSet=\" + replica\n );\n\n MongoClientSettings mongoClientSettings = MongoClientSettings.builder()\n .applyToSslSettings(builder ->\n builder.enabled(true).invalidHostNameAllowed(true)\n )\n .applyToConnectionPoolSettings(\n builder -> builder.minSize(10)\n .maxSize(100)\n .maxWaitTime(8,TimeUnit.MINUTES)\n )\n .applyToSocketSettings(builder -> builder.applySettings(SocketSettings.builder().connectTimeout(5,TimeUnit.MINUTES).readTimeout(5,TimeUnit.MINUTES).build()))\n .applyConnectionString(connectionString)\n .build();\n\n return MongoClients.create(mongoClientSettings);\n }\n",
"text": "Hello @Ross_Lawley ,I’m having the same issue here, but I use reactive mongodb in a spring boot application.This is my logs messages:And this is my mongodb config:I tried to use different networks to avoid this problem, bit it always gives me the same error.\nCould you help me please?",
"username": "Hassan_EL-MZABI"
},
{
"code": "",
"text": "Hi Fellow developers,I have recently started seeing this issues again as soon as I switched to ReactMongoTemplate.\nIn my case I am doing multiple find operations as part of flux streams and the connection does not go through after first few subscriptions. It also many times bring the whole cluster down.Please tell me if someone managed to fix this issue.",
"username": "Prateek_Sumermal_Bagrecha"
},
{
"code": "",
"text": "Facing similar issue, any solution?",
"username": "sunny_tyagi"
},
{
"code": "",
"text": "Did you find the root cause as similar issue i am facing.",
"username": "sunny_tyagi"
},
{
"code": "Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}.serverSelectionTimeout",
"text": "Hi @sunny_tyagi welcome to the community!By “similar issue”, I assume you see an error message that looks like this:Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}.Is this correct?If yes, then this is a typical timeout error that is shared across all official drivers, where it tries to connect to the primary member of a replica set, and gives up when it cannot find one after 30 seconds (default). See MongoClient settings serverSelectionTimeout.Typically this is caused by network issues where the driver cannot reach the servers. It could be caused by network partition, security settings that prevent the client to reach the server (e.g. IP whitelisting issues), DNS issues, blocked port, among many others.If your app used to be able to connect without issues and now it cannot, then perhaps there is something different now. Things to check may include whether you’re connecting from the same network as before, whether there are DNS changes to the server, whether security settings was changed in the server, or any other network reachability issues that wasn’t there before.If you have checked everything and all seems to be in order, I would suggest you to consult your network administrator to troubleshoot your connectivity issues.Just to be complete, if anyone else is encountering this error in the future, it’s best to create a new topic describing the exact error encountered, along with more specific information such as MongoDB version, driver version, topology description, and the complete error message, since everyone’s situation is different and the same error may be the result of two very different causes Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "It was fixed my increasing the socket settings to the approximate time taken for cluster to come up again or reelect a leader. Also increase maxSize to like 10, spring could not handle 100 connections adequately. Now process is a bit slow but still more reliable.",
"username": "Prateek_Sumermal_Bagrecha"
},
{
"code": "",
"text": "could you please add the code snippet ? i am also facing the same issue",
"username": "Madhan_Kumar"
},
{
"code": "",
"text": "Hi, my configuration for the services that is using the mongodb service, using application.properties, to contact it with the uri mongodb://mongodbservice:27015/dbname. I found in the logs it is calling localhost:27017. Any clue?",
"username": "Muhammed_Alghwell"
}
] | [
"java",
"connecting",
"spring-data-odm"
] |
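The 30-second window discussed throughout this thread is the driver's server selection timeout, which (like the other timeouts) can also be tuned directly from the connection string. A hypothetical sketch using standard URI options shared by the official drivers — host names, replica set name, and values here are placeholders, not the posters' real settings:

```shell
# serverSelectionTimeoutMS: how long the driver waits for a matching server
# connectTimeoutMS:         per-connection TCP connect timeout
# Failing fast (e.g. 10 s) surfaces topology/network problems sooner than
# the 30 s default used in the errors above.
mongosh "mongodb://host1:27017,host2:27017/mydb?replicaSet=rs0&serverSelectionTimeoutMS=10000&connectTimeoutMS=10000"
```

Shortening the timeout does not fix the underlying network or heap problem, but it makes the failure visible quickly while troubleshooting.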
2022-03-16T10:12:35.360Z | null | 29,336 | Storing images in MongoDB | Storing images in MongoDB | [
{
"code": "",
"text": "hello community! i am new to MongoDB, i wanna know if mongodb is good for storing images and how i can store images in mongodb. I am using Node, Express framework and reactjs. Pls i need a perfect tutorial on how i can work with images. Thanks a bunch.",
"username": "Phemmyte_Electronic_Designs_Arduino"
},
{
"code": "",
"text": "Look at https://docs.mongodb.com/manual/core/gridfs/",
"username": "steevej"
},
{
"code": "destinationdestination",
"text": "No, MongoDB is not a good place for storing files. If you want to store files, you should use storages like Amazon S3 or Google Could Storage.The good practice is to store the files in a storage and then to just save the URL of the uploaded image in the MongoDB.When it comes to Node.js and Express, I would suggest multer package middleware, which is primarily used for uploading files. If you would use Amazon S3 for a storage, there is also a plug-in multer-s3 package that can be helpful.You can check this video on how to configure Multer and implement image upload on your Node.js server. Once the image is uploaded, Multer will return data to you with destination field included. You can then save that destination in the MongoDB.",
"username": "NeNaD"
},
{
"code": "",
"text": "i am very grateful. I hope it solves my problem.",
"username": "Phemmyte_Electronic_Designs_Arduino"
},
{
"code": "",
"text": "I had a task last week that required me to upload images to a database. I’ve S3 buckets to store images in the past but I tried MongoDB this time. So, I used a package called express-fileuploader, very similar to Multer, and uploaded my file to a temp directory in the server, from there I grabbed it using the native fs methods and uploaded it to the DB. The image file was uploaded as binary data, ultimately when getting the data you get it as a buffer base 64 if I’m not mistaken. JS can render using different methods.In conclusion, use an S3 bucket or google cloud storage. Uploading to MongoDB is not as complicated as it sounds, but you’ll definitely save some time using other services.",
"username": "Kevin_Grimaldi"
},
{
"code": "",
"text": "Wish people could be much of a use like you",
"username": "MyName_MyLastname"
},
{
"code": "",
"text": "can u explain why mongo is not a good place to store files does that include image files ?",
"username": "J_C4"
},
{
"code": "",
"text": "hey can u provide some code to do so i am new to node and mongo",
"username": "J_C4"
}
] | [
"node-js"
] |
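The GridFS route linked at the top of this thread can be tried from the command line without writing any Node code, using the mongofiles tool that ships with the MongoDB database tools. A sketch — the database name and file name are placeholders, and a mongod must be reachable on the default port:

```shell
# Upload an image into GridFS (stored across the fs.files and fs.chunks collections)
mongofiles --db=myapp put photo.jpg

# List the files held in GridFS, then fetch one back to disk
mongofiles --db=myapp list
mongofiles --db=myapp get photo.jpg
```

This is handy for testing whether GridFS fits your use case before committing to a driver-level implementation; for production uploads you would use the driver's GridFS bucket API or, per the advice above, external object storage plus a URL in MongoDB.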
2020-12-30T19:51:51.603Z | null | 9,595 | Atlas Search Autocomplete on an array of object | Atlas Search Autocomplete on an array of object | [
{
"code": "services: [\n\n{\n name: \"repair tire\"\n},\n{\n name: \"oil change\"\n}\n\n]\n",
"text": "HiI have an array of objects likeI set a MongoDB atlas full text search with autocomplete on “services.name” but when i search using autocomplete it doest not show anything.",
"username": "Pushaan_Nayyar"
},
{
"code": "",
"text": "Did you find a solution for this yet? I am experiencing the same kind of problem…",
"username": "Jakob_Noggler"
},
{
"code": "",
"text": "Hello!Thanks so much for your questions about Atlas Search. Can you please share the sample code of your search query and your index, so I can have a deeper look?Karen",
"username": "Karen_Huaulme"
},
{
"code": "{ \"name\" : \"Phani\", \"phones\" : [ { \"phoneNumber\" : \"123456789\" } ] }\n{ \"name\" : \"Yuva\", \"phones\" : [ { \"phoneNumber\" : \"987654321\" } ] }\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"phones\": {\n \"fields\": {\n \"phoneNumber\": {\n \"tokenization\": \"edgeGram\",\n \"type\": \"autocomplete\"\n }\n },\n \"type\": \"document\"\n }\n }\n }\n}\n",
"text": "Hello Karen, we are facing similar kind of issue. Please find details below:Sample collection data:Autocomplete search index on phoneNumber:Query used:{ $search: {“autocomplete”: {“query”: “1234”,“path”: “phones.phoneNumber”}} }=> returns no result",
"username": "Yuva_Phani_Chennu"
},
{
"code": "",
"text": "I’m facing the same issue. It works on a simple nested document, but not for an array of nested documents. It works with “text” instead of “autocomplete” but that requires the exact string which is not really possible.",
"username": "Shadoweb_EB"
},
{
"code": "",
"text": "Hello everybody, same issue here Somebody find a solution ? Please help, because this is very impactful. Thanks in advance.",
"username": "Nicolas_Sandoz"
},
{
"code": "",
"text": "Please Karen, help us with this topic ",
"username": "Nicolas_Sandoz"
},
{
"code": "\"fuzzy\": { \"maxEdits\": 2 }",
"text": "Hello, everyone. Thank you so much for the code and index definitions. You are correct that this does not currently work, but this is a particular use case we are actively working on right now. @Shadoweb_EB, you could include a fuzzy object into the text- but you would have to get close to the end of the string for this to work.\n\"fuzzy\": { \"maxEdits\": 2 }\nSo this is not exactly what you need.\nHopefully, we will have a resolution soon.",
"username": "Karen_Huaulme"
},
{
"code": "",
"text": "How much time until it will be supported?",
"username": "Omer_Yaari"
},
{
"code": "",
"text": "Hello Karen, Thanks for your answer. Do you think it’s going to take a long time ? Do you have approximatively an idea on how much time ? Many of us are waiting for a solution to this problem. Also, could you posted here once a solution has been found ? Thank you in advance for your help.",
"username": "Nicolas_Sandoz"
},
{
"code": "",
"text": "Hello. It should be done this quarter. Please vote on the issue on feedback.mongodb.com, you should be notified when it is worked out. Allow autocomplete search on multiple fields using a wildcard path or by specifying multiple fields in the path – MongoDB Feedback EngineKaren",
"username": "Karen_Huaulme"
},
{
"code": "",
"text": "Hi, I voted on the linked enhancement request. There aren’t any updates on that ticket, though. Can you provide an update on progress? I would love this feature. Thanks!",
"username": "Kellen_Busby"
},
{
"code": "",
"text": "Yes, and it’d be great if this limitation were mentioned in the docs. I losta bunch of time on this today for nothing. Thanks.",
"username": "Warren_Wonderhaus"
},
{
"code": "",
"text": "Actually I was wrong it is in the docs, kind of. According to the index definition page (https://docs.atlas.mongodb.com/atlas-search/index-definitions/#array), you can’t even use autocomplete for an array of strings, let alone an array of docs:“You can’t use the autocomplete type to index fields whose value is an array of strings.”FWIW, I’ve got an array of ‘tags’ for my use case. Attempting to refactor it as a string that can be searched using autocomplete instead.",
"username": "Warren_Wonderhaus"
},
{
"code": "",
"text": "Hi, when is it planned to be possible to use autocomplete on array of objects?\nThanks in advance",
"username": "Alexander_Wieland"
},
{
"code": "",
"text": "Hi.\nWhen do you plan to implement autocomplete on arrays?\nThank you.",
"username": "V_C1"
},
{
"code": "autocompleteautocompleteautocomplete",
"text": "Hi @Alexander_Wieland and @V_C1 - Welcome to the community.At this point in time as per the autocomplete documentation:Alex, I believe the second dot point is related to your question. V_C1, if this is also the case for you too (array of documents), then please refer to the docs link. If not, I would create a new topic and advise your use case details including sample documents.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi Jason.\nWhat is the point to index something if one cannot search for it?\nAre there any plans to implement such a search?\nThank you.",
"username": "V_C1"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"autocomplete\"\n },\n \"roles\": {\n \"dynamic\": true,\n \"type\": \"embeddedDocuments\",\n \"fields\": {\n \"externalId\": {\n \"type\": \"autocomplete\"\n },\n \"phone\": {\n \"type\": \"autocomplete\"\n }\n }\n }\n }\n }\n}\ndb.entities.aggregate([\n {\n \"$search\": {\n \"compound\": {\n \"should\": [\n {\n \"autocomplete\": {\n \"query\": \"search query input\",\n \"path\": \"name\"\n }\n },\n {\n \"embeddedDocument\": {\n \"path\": \"roles\",\n \"operator\": {\n \"compound\": {\n \"should\": [\n {\n \"autocomplete\": {\n \"path\": \"roles.externalId\",\n \"query\": \"search query input\"\n }\n },\n {\n \"autocomplete\": {\n \"path\": \"roles.phone\",\n \"query\": \"search query input\"\n }\n }\n ]\n }\n }\n }\n }\n ]\n }\n }\n },\n {\n $addFields: {\n \"score\": { $meta: \"searchScore\" }\n }\n }])\n\n",
"text": "Hi Jason, thanks for your response. Since I had a hard time making it, here is an implementation of this use case that works for me (it might help others):The search index JSON on mongo AtlasThe query",
"username": "Robin_FERRE"
},
{
"code": "",
"text": "Thank you Robin.\nThis the solution.",
"username": "V_C1"
}
] | [] |
2022-05-19T08:20:18.630Z | null | 9,709 | I can not connect my mongodb compass with my cluster | I can not connect my mongodb compass with my cluster | [
{
"code": "",
"text": "querySrv ENODATA _mongodb._tcp.jvuwithjesus.ygys3.mongodb.net",
"username": "Jimmy_h_Vu"
},
{
"code": "mongodb+srv://",
"text": "Are you using a mongodb+srv:// connection string? If you are, it’s possible that the DNS servers of your internet service provider can’t resolve SRV records. Try switching your DNS to 8.8.8.8 and 8.8.4.4 (Google DNSs) and see if that works.",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "This is my connection string : mongodb+srv://hellojesus1:[email protected]/jesusdatabase?retryWrites=true&w=majority\nI still have error:\nquerySrv ENODATA _mongodb._tcp.jvuwithjesus.ygys3.mongodb.net",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "The connection string is correct. You have toTry switching your DNS to 8.8.8.8 and 8.8.4.4 (Google DNSs) and see if that works.",
"username": "steevej"
},
{
"code": "",
"text": "I tried to switch to you dns but It still doesn’t work.\nIt still has same error.\nIs there any other way?\nPlease help !!!",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "Did you try long form of string instead of srv?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I tried to switch to you dns but It still doesn’t work.Please post a screenshot of the DNS configuration you tried.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Steevej,\nCan you show me the screen shot of google dns connection.\nI really don’t know where I replace google dns.\nPlease Help!!!",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "I used this connection string:\nmongodb+srv://hellojesus1:[email protected]/jesusdatabase?retryWrites=true&w=majority\nbut it doen’t work.\nPlease help!!!",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "Check this linkA free, global DNS resolution service that you can use as an alternative to your current DNS provider.Long form of connect string you can get from your Atlas account.Choose old version of shellIs password given in your connect string correct?\nI am getting different error2022-05-21T16:43:41.648+0530 I NETWORK [js] Marking host jvuwithjesus-shard-00-02.ygys3.mongodb.net:27017 as failed ::\ncaused by :: Location40659: can’t connect to new replica set master [jvuwithjesus-shard-00-02.ygys3.mongodb.net:27017],\nerr: Location8000: bad auth : Authentication failed.",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "This is my creen shot of network ip:",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "This is not your DNS server setting. It is your Atlas network access list. Go to the Google Developers link provided by @Ramachandra_Tummala and follow the instructions to change your DNS settings.Alternatively, do as suggested by @Ramachandra_Tummala and use the Long form.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Steevej,\nI already connected your DNS settings 8.8.8.8 and 8.8.8.4 but It did not work and I could not use my internet to access any websites.\nIs there other ways?\nPlease help!!!",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "Did you do the dns settings on your laptop/pc from network-adapter?Did you try long form(old style) connect string method?\nFrom your Atlas account choose connect to shell and select old shell version in drop down\nWhat other options you see in your Compass?\nDoes fill individual fields option exist?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I already connected your DNS settings 8.8.8.8 and 8.8.8.4It is 8.8.4.4 and please help us help you by providinga screenshot of the DNS configuration you tried.",
"username": "steevej"
},
{
"code": "",
"text": "\nimage1920×1080 412 KB\n",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "I do not see where you specified 8.8.8.8 or 8.8.4.4",
"username": "steevej"
},
{
"code": "",
"text": "\nimage1920×1080 428 KB\n",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "That’s the correct DNS. But your previous screenshot shows that you are not using this Wi-Fi network interface. You have to set the DNS on the network interface you are using.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Steevej,\nI really don’t understand what you mean about network interface. Can you give an example or explainations about it.\nThanks for your time.\nPlease help!!!",
"username": "Jimmy_h_Vu"
}
] | [
"compass",
"atlas-cluster"
] |
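The querySrv ENODATA error in this thread means the resolver could not return the SRV record behind the mongodb+srv:// URI. That lookup can be checked by hand; if it fails against the system resolver but succeeds against 8.8.8.8, the DNS change suggested above is the fix. A sketch using the hostname from this thread:

```shell
# Query the SRV record through the system's default resolver...
nslookup -type=SRV _mongodb._tcp.jvuwithjesus.ygys3.mongodb.net

# ...then through Google's public DNS for comparison
nslookup -type=SRV _mongodb._tcp.jvuwithjesus.ygys3.mongodb.net 8.8.8.8
```

A successful lookup lists the individual shard hosts and ports, which is also how you can assemble the long-form (non-SRV) connection string suggested earlier in the thread.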
2022-07-13T12:06:18.256Z | null | 8,451 | Cannot add authorization "enable" to mongod.conf yaml-cpp error | Cannot add authorization “enable” to mongod.conf yaml-cpp error | [
{
"code": "rror parsing YAML config file: yaml-cpp: error at line 31, column 6: end of map not found\n",
"text": "Hi fellows,\nI have a problem I cpuldn’t solve nor find a solution for yet. I needed to add authorization “enabled” to the mongod.conf to create and update users. If I try I get the following error:I couldn’t find a solution how to fix that . My mongod.conf you find here: https://pastebin.com/azXpEqiDMany thanks in advance,\nUli",
"username": "Ulrich_Kleemann1"
},
{
"code": "security.authorizationsecurity# Security\nsecurity:\n authorization: enabled\n",
"text": "Hi\nYou need to add security.authorization part to your config file - in your config file, security seems to be hashedI suggest check official MongoDB documentation",
"username": "Arkadiusz_Borucki"
},
{
"code": " systemctl restart mongod\nroot@docker:/etc# systemctl status mongod\n× mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Wed 2022-07-13 14:36:40 CEST; 431ms ago\n Duration: 4.728s\n Docs: https://docs.mongodb.org/manual\n Process: 105115 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=14)\n Main PID: 105115 (code=exited, status=14)\n CPU: 791ms\n\nJul 13 14:36:35 docker systemd[1]: Started MongoDB Database Server.\nJul 13 14:36:40 docker systemd[1]: mongod.service: Main process exited, code=exited, status=14/n/a\nJul 13 14:36:40 docker systemd[1]: mongod.service: Failed with result 'exit-code'.\n",
"text": "Hi,Thanks but that wasn’t it all yet. I removed the # restarted mongod but now I getif I try to create a new user I still get this error:\ndb.createUser({\n… user: “m103-admin”,\n… pwd: “m103-pass”,\n… roles: [\n… {role: “root”, db: “admin”}\n… ]\n… })\nuncaught exception: Error: couldn’t add user: command createUser requires authentication :Regards,\nUli",
"username": "Ulrich_Kleemann1"
},
{
"code": "",
"text": "Hi,\nmongod starts now, but the error cerating a new user still exists . How can I fix that?Many thanks in advance,\nUli",
"username": "Ulrich_Kleemann1"
},
{
"code": "",
"text": "Hi,\nCan you send you the current config file and output from mongod log ?\nDid you add your first, admin user before you enabled authorization ?",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "you need to add a first admin user before you enable authorization",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "Hi, with or without authorization I get > Error: couldn’t add user: command createUser requires authentication :_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDB.prototype.createUser@src/mongo/shell/db.js:1367:11\n@(shell):1:1\nHere is the mongod.log you asked for:\nhttps://pastebin.com/EuARNZmBI hope it will be helpfulThanks in advance\nUli",
"username": "Ulrich_Kleemann1"
},
{
"code": "",
"text": "Hi,the mongod.log you can find here as a file: https://ukleemann.net/index.php/apps/files/?dir=/Documents/FILES&fileid=277Regards,Uli",
"username": "Ulrich_Kleemann1"
},
{
"code": "admin",
"text": "I assume you are using standalone mongod instance, at least I could not see a replica set in your config file.\nYou can add the first admin user to your database with disabled access control, try the following steps:procedure is available online\nIf you enable access control before creating any user, MongoDB provides a localhost exception which allows you to create a user administrator in the admin database.",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "Hi,I changed authorization from enable to disabled like this:security:\nauthorization: disabledthen I tried to add an admin user like that:use admin\ndb.createUser({then I got the know error againuncaught exception: Error: couldn’t add user: command createUser requires authentication :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDB.prototype.createUser@src/mongo/shell/db.js:1367:11\n@(shell):1:1\nwhat did I do wrong?Regards,\nUli",
"username": "Ulrich_Kleemann1"
},
{
"code": "# security:\n# authorization: enabled\n",
"text": "you need to disable access control like this (add # before security and authorization: enabled)now restart mongod and add first user",
"username": "Arkadiusz_Borucki"
},
{
"code": "Jul 13 14:36:35 docker systemd[1]: Started MongoDB Database Server.\nJul 13 14:36:40 docker systemd[1]: mongod.service: Main process exited, code=exited, status=14/n/a\nJul 13 14:36:40 docker systemd[1]: mongod.service: Failed with result 'exit-code'.\nJul 13 14:36:40 docker systemd[1]: mongod.service: Failed with result 'exit-code'.ss -tlnp\nps -aef | grep [m]ongod\ndocker ps\n",
"text": "Your restart that indicates that it fails.And in the same post, you are able to connect and call db.createUser.This is inconsistent. If mongod does not start then you cannot connect. If you can connect then another instance is running or you are not connecting to the instance you think you are starting.FromJul 13 14:36:40 docker systemd[1]: mongod.service: Failed with result 'exit-code'.it looks like you are trying to start a docker instance. It is possible then when you connect you try to connect to a local instance, which is not using the configuration file you shared and is not running with authentication.To know more about your setup please share the output of the following commands:",
"username": "steevej"
},
{
"code": "",
"text": "mongo started (it is how I understand it), see",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "Thanks, I saw that, but I have some doubts about the whole setup. So I am still interested to see the output of the commands.I would also like to see the command used to connect.",
"username": "steevej"
},
{
"code": "ss -ltnp \nss -ltnp\nState Recv-Q Send-Q Local Address:Port Peer Address:Port Process \nLISTEN 0 4096 127.0.0.1:8125 0.0.0.0:* users:((\"netdata\",pid=5347,fd=68)) \nLISTEN 0 4096 0.0.0.0:30783 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=254)) \nLISTEN 0 4096 127.0.0.1:19999 0.0.0.0:* users:((\"netdata\",pid=5347,fd=5)) \nLISTEN 0 4096 0.0.0.0:31808 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=43)) \nLISTEN 0 64 0.0.0.0:2049 0.0.0.0:* \nLISTEN 0 4096 0.0.0.0:10050 0.0.0.0:* users:((\"zabbix_agentd\",pid=1809,fd=4),(\"zabbix_agentd\",pid=1808,fd=4),(\"zabbix_agentd\",pid=1807,fd=4),(\"zabbix_agentd\",pid=1806,fd=4),(\"zabbix_agentd\",pid=1805,fd=4),(\"zabbix_agentd\",pid=1769,fd=4))\nLISTEN 0 4096 192.168.10.67:27011 0.0.0.0:* users:((\"mongod\",pid=129697,fd=14)) \nLISTEN 0 4096 127.0.0.1:27011 0.0.0.0:* users:((\"mongod\",pid=129697,fd=13)) \nLISTEN 0 4096 127.0.0.1:2947 0.0.0.0:* users:((\"systemd\",pid=1,fd=280)) \nLISTEN 0 4096 192.168.10.67:27012 0.0.0.0:* users:((\"mongod\",pid=44529,fd=14)) \nLISTEN 0 4096 127.0.0.1:27012 0.0.0.0:* users:((\"mongod\",pid=44529,fd=13)) \nLISTEN 0 4096 192.168.10.67:27013 0.0.0.0:* users:((\"mongod\",pid=44585,fd=14)) \nLISTEN 0 4096 127.0.0.1:27013 0.0.0.0:* users:((\"mongod\",pid=44585,fd=13)) \nLISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=279)) \nLISTEN 0 4096 127.0.0.1:27017 0.0.0.0:* users:((\"mongod\",pid=141575,fd=12)) \nLISTEN 0 4096 127.0.0.1:10249 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=248)) \nLISTEN 0 3 127.0.0.1:2601 0.0.0.0:* users:((\"zebra\",pid=1612,fd=25)) \nLISTEN 0 80 0.0.0.0:3306 0.0.0.0:* users:((\"mariadbd\",pid=1821,fd=31)) \nLISTEN 0 4096 0.0.0.0:59563 0.0.0.0:* users:((\"rpc.mountd\",pid=1990,fd=5)) \nLISTEN 0 511 127.0.0.1:6379 0.0.0.0:* users:((\"redis-server\",pid=1757,fd=6)) \nLISTEN 0 4096 127.0.0.1:6444 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=22)) \nLISTEN 0 4096 0.0.0.0:37261 0.0.0.0:* users:((\"rpc.statd\",pid=1989,fd=9)) \nLISTEN 0 10 
127.0.0.1:5038 0.0.0.0:* users:((\"asterisk\",pid=7558,fd=7)) \nLISTEN 0 4096 0.0.0.0:47279 0.0.0.0:* users:((\"rpc.mountd\",pid=1990,fd=9)) \nLISTEN 0 4096 0.0.0.0:111 0.0.0.0:* users:((\"rpcbind\",pid=1204,fd=4),(\"systemd\",pid=1,fd=235)) \nLISTEN 0 4096 127.0.0.1:10256 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=257)) \nLISTEN 0 4096 127.0.0.1:10257 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=210)) \nLISTEN 0 4096 127.0.0.1:10258 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=201)) \nLISTEN 0 4096 127.0.0.1:10259 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=219)) \nLISTEN 0 4096 0.0.0.0:47219 0.0.0.0:* users:((\"rpc.mountd\",pid=1990,fd=13)) \nLISTEN 0 32 10.234.225.1:53 0.0.0.0:* users:((\"dnsmasq\",pid=13324,fd=7)) \nLISTEN 0 32 192.168.12.1:53 0.0.0.0:* users:((\"dnsmasq\",pid=3251,fd=6)) \nLISTEN 0 32 192.168.11.1:53 0.0.0.0:* users:((\"dnsmasq\",pid=3216,fd=6)) \nLISTEN 0 32 192.168.100.1:53 0.0.0.0:* users:((\"dnsmasq\",pid=3183,fd=6)) \nLISTEN 0 4096 127.0.2.1:53 0.0.0.0:* users:((\"dnscrypt-proxy\",pid=1749,fd=8),(\"systemd\",pid=1,fd=269)) \nLISTEN 0 128 127.0.0.1:8118 0.0.0.0:* users:((\"privoxy\",pid=2114,fd=4)) \nLISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:((\"sshd\",pid=1814,fd=3)) \nLISTEN 0 128 127.0.0.1:631 0.0.0.0:* users:((\"cupsd\",pid=1748,fd=8)) \nLISTEN 0 244 127.0.0.1:5432 0.0.0.0:* users:((\"postgres\",pid=1922,fd=4)) \nLISTEN 0 3 127.0.0.1:2616 0.0.0.0:* users:((\"staticd\",pid=1620,fd=12)) \nLISTEN 0 244 127.0.0.1:5433 0.0.0.0:* users:((\"postgres\",pid=1923,fd=6)) \nLISTEN 0 64 0.0.0.0:37849 0.0.0.0:* \nLISTEN 0 4096 127.0.0.1:10010 0.0.0.0:* users:((\"containerd\",pid=8951,fd=18)) \nLISTEN 0 244 127.0.0.1:5434 0.0.0.0:* users:((\"postgres\",pid=1849,fd=6)) \nLISTEN 0 4096 127.0.0.1:9050 0.0.0.0:* users:((\"tor\",pid=1851,fd=6)) \nLISTEN 0 4096 [::1]:8125 [::]:* users:((\"netdata\",pid=5347,fd=67)) \nLISTEN 0 64 [::]:2049 [::]:* \nLISTEN 0 4096 [::]:10050 [::]:* 
users:((\"zabbix_agentd\",pid=1809,fd=5),(\"zabbix_agentd\",pid=1808,fd=5),(\"zabbix_agentd\",pid=1807,fd=5),(\"zabbix_agentd\",pid=1806,fd=5),(\"zabbix_agentd\",pid=1805,fd=5),(\"zabbix_agentd\",pid=1769,fd=5))\nLISTEN 0 4096 [::]:53539 [::]:* users:((\"rpc.mountd\",pid=1990,fd=15)) \nLISTEN 0 4096 [::1]:2947 [::]:* users:((\"systemd\",pid=1,fd=279)) \nLISTEN 0 4096 [::]:42597 [::]:* users:((\"rpc.statd\",pid=1989,fd=11)) \nLISTEN 0 4096 *:10250 *:* users:((\"k3s-server\",pid=1958,fd=278)) \nLISTEN 0 80 [::]:3306 [::]:* users:((\"mariadbd\",pid=1821,fd=32)) \nLISTEN 0 4096 *:10251 *:* users:((\"k3s-server\",pid=1958,fd=218)) \nLISTEN 0 4096 *:6443 *:* users:((\"k3s-server\",pid=1958,fd=14)) \nLISTEN 0 511 [::1]:6379 [::]:* users:((\"redis-server\",pid=1757,fd=7)) \nLISTEN 0 4096 [::]:49839 [::]:* users:((\"rpc.mountd\",pid=1990,fd=7)) \nLISTEN 0 4096 [::]:111 [::]:* users:((\"rpcbind\",pid=1204,fd=6),(\"systemd\",pid=1,fd=237)) \nLISTEN 0 511 *:80 *:* users:((\"apache2\",pid=11166,fd=4),(\"apache2\",pid=11165,fd=4),(\"apache2\",pid=11159,fd=4),(\"apache2\",pid=3098,fd=4),(\"apache2\",pid=3097,fd=4),(\"apache2\",pid=3096,fd=4),(\"apache2\",pid=3095,fd=4),(\"apache2\",pid=3094,fd=4),(\"apache2\",pid=3093,fd=4),(\"apache2\",pid=3053,fd=4),(\"apache2\",pid=3044,fd=4))\nLISTEN 0 64 [::]:43635 [::]:* \nLISTEN 0 128 [::1]:8118 [::]:* users:((\"privoxy\",pid=2114,fd=5)) \nLISTEN 0 128 [::]:22 [::]:* users:((\"sshd\",pid=1814,fd=4)) \nLISTEN 0 128 [::1]:631 [::]:* users:((\"cupsd\",pid=1748,fd=7)) \nLISTEN 0 244 [::1]:5432 [::]:* users:((\"postgres\",pid=1922,fd=3)) \nLISTEN 0 244 [::1]:5433 [::]:* users:((\"postgres\",pid=1923,fd=5)) \nLISTEN 0 244 [::1]:5434 [::]:* users:((\"postgres\",pid=1849,fd=5)) \nLISTEN 0 4096 [::]:44699 [::]:* users:((\"rpc.mountd\",pid=1990,fd=11)) \nLISTEN 0 4096 *:6556 *:* users:((\"systemd\",pid=1,fd=296)) \ndocker -ps \n\ndocker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n\nps -aef |grep [m]ongodb\n\nps -aef |grep 
[m]ongodb\nuli 127446 45830 0 15:58 pts/16 00:00:04 mongosh mongodb://local\nmongodb 141575 1 0 16:49 ? 00:00:06 /usr/bin/mongod --config /etc/mongod.conf\n",
"text": "Hi,following you advice uncommentig authorization with # I get the same error when I try to create the m103-admin userhere are the outputs ofHope this will help.Thanks,\nUli",
"username": "Ulrich_Kleemann1"
},
{
"code": "mongod.conf",
"text": "can you also show your current mongod.conf file (with disabled access control)",
"username": "Arkadiusz_Borucki"
},
{
"code": "ping local\nps -aef | grep [m]ongodroot@docker",
"text": "More weird stuff.Output ofThe ss -tlnp output shows at least 3 instances of mongod listening.LISTEN 0 4096 192.168.10.67:27012 0.0.0.0:* users:((“mongod”,pid=44529,fd=14))\nLISTEN 0 4096 192.168.10.67:27013 0.0.0.0:* users:((“mongod”,pid=44585,fd=14))\nLISTEN 0 4096 127.0.0.1:27017 0.0.0.0:* users:((“mongod”,pid=141575,fd=12))But your ps output only shows:mongodb 141575 1 0 16:49 ? 00:00:06 /usr/bin/mongod --config /etc/mongod.confMay it is because you didps -aef |grep [m]ongodbrather thanps -aef | grep [m]ongodThe trailing b you added might the others not show if started by another user. This, or the output is redacted.With which user are you runningdocker psCan you do it asroot@docker",
"username": "steevej"
},
{
"code": "",
"text": "hi steeve,ps -aef | grep [m]ongod gives meps -aef |grep [m]ongodb\nuli 127446 45830 0 15:58 pts/16 00:00:04 mongosh mongodb://local\nmongodb 141575 1 0 16:49 ? 00:00:06 /usr/bin/mongod --config /etc/mongod.confthe docker ps command I run as root@docker but docker is no docker container just a hostname therefore it shows no running docker containersRegards,Uli",
"username": "Ulrich_Kleemann1"
},
{
"code": "",
"text": "hi steeve,thanks for you help. the 3 instnaces I made to create a local replica set following this tutorial from mongo m103-courseCloud: MongoDB Cloudthats what I want to do so the ps command gives you 3 instances on 3 different ports 27011 27012 and 27013my mongod.conf with diabled commented security section looks like this# mongod.conf# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/# Where and how to store data.\nstorage:# where to write logging data.\nsystemLog:# network interfaces\nnet:# how the process runs\nprocessManagement:#security:\n# authorization: disabledRegards,Uli",
"username": "Ulrich_Kleemann1"
},
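The post above only shows the section names of the trimmed config. As a point of reference, a minimal mongod.conf with access control commented out could look like the following sketch; the file paths and bind address are assumptions typical of a Debian/Ubuntu package install, not taken from the thread:

```yaml
# mongod.conf — illustrative sketch; paths are typical Debian/Ubuntu defaults
storage:
  dbPath: /var/lib/mongodb
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
  bindIp: 127.0.0.1
processManagement:
  timeZoneInfo: /usr/share/zoneinfo
# Access control disabled by commenting out the security section:
#security:
#  authorization: enabled
```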
{
"code": "ps -aef | grep 44529\nps -aef | grep 44585\nping local\n",
"text": "If the processes are listening they should show up with ps.ShareOnce againOutput ofRemove the trailing d fromps -aef | grep [m]ongodand do it as root.",
"username": "steevej"
}
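The `[m]ongod` bracket trick used throughout this thread can be demonstrated in isolation. Here the process listing is simulated with printf (the sample lines are made up for illustration):

```shell
# The pattern [m]ongod matches the literal text "mongod", but grep's own
# command line contains "[m]ongod", which the pattern does NOT match --
# so grep filters itself out of a real ps listing.
printf 'mongod --config /etc/mongod.conf\ngrep [m]ongod\n' | grep '[m]ongod'
# prints only: mongod --config /etc/mongod.conf
```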
] | [] |
2022-03-31T15:53:59.789Z | null | 7,109 | Mobile Bytes #9 : Realm React for React Native | Mobile Bytes #9 : Realm React for React Native | [
{
"code": "@realm/react@realm/reactrealm-js@realm/react@realm/reactrealmTask_id_idclass Task extends Realm.Object {\n _id!: Realm.BSON.ObjectId;\n description!: string;\n isComplete!: boolean;\n createdAt!: Date;\n\n static generate(description: string) {\n return {\n _id: new Realm.BSON.ObjectId(),\n description,\n createdAt: new Date(),\n };\n }\n\n static schema = {\n name: 'Task',\n primaryKey: '_id',\n properties: {\n _id: 'objectId',\n description: 'string',\n isComplete: { type: 'bool', default: false },\n createdAt: 'date'\n },\n };\n}\n\ncreateRealmContextRealmProvideruseRealmuseQueryuseObjectRealmProvidercreateRealmContextRealmProviderconst { RealmProvider, useRealm, useQuery } = createRealmContext({ schema: [Task] })\n\nexport default function AppWrapper() {\n return (\n <RealmProvider><TaskApp /></RealmProvider>\n )\n}\nuseRealmuseQueryTextInputTaskTaskFlatListuseCallbackStylesheet.createfunction TaskApp() {\n const realm = useRealm();\n const tasks = useQuery(Task);\n const [newDescription, setNewDescription] = useState(\"\")\n\n return (\n <SafeAreaView>\n <View style={{ flexDirection: 'row', justifyContent: 'center', margin: 10 }}>\n <TextInput\n value={newDescription}\n placeholder=\"Enter new task description\"\n onChangeText={setNewDescription}\n />\n <Pressable\n onPress={() => {\n realm.write(() => {\n realm.create(\"Task\", Task.generate(newDescription));\n });\n setNewDescription(\"\")\n }}><Text>➕</Text></Pressable>\n </View>\n <FlatList data={tasks.sorted(\"createdAt\")} keyExtractor={(item) => item._id.toHexString()} renderItem={({ item }) => {\n return (\n <View style={{ flexDirection: 'row', justifyContent: 'center', margin: 10 }}>\n <Pressable\n onPress={() =>\n realm.write(() => {\n item.isComplete = !item.isComplete\n })\n }><Text>{item.isComplete ? 
\"✅\" : \"☑️\"}</Text></Pressable>\n <Text style={{ paddingHorizontal: 10 }} >{item.description}</Text>\n <Pressable\n onPress={() => {\n realm.write(() => {\n realm.delete(item)\n })\n }} ><Text>{\"🗑️\"}</Text></Pressable>\n </View>\n );\n }} ></FlatList>\n </SafeAreaView >\n );\n}\n\nimport React, { useState } from \"react\";\nimport { SafeAreaView, View, Text, TextInput, FlatList, Pressable } from \"react-native\";\nimport { Realm, createRealmContext } from '@realm/react'\nclass Task extends Realm.Object {\n _id!: Realm.BSON.ObjectId;\n description!: string;\n isComplete!: boolean;\n createdAt!: Date;\n\n static generate(description: string) {\n return {\n _id: new Realm.BSON.ObjectId(),\n description,\n createdAt: new Date(),\n };\n }\n\n static schema = {\n name: 'Task',\n primaryKey: '_id',\n properties: {\n _id: 'objectId',\n\ndescription: 'string',\n isComplete: { type: 'bool', default: false },\n createdAt: 'date'\n },\n };\n}\n\nconst { RealmProvider, useRealm, useQuery } = createRealmContext({ schema: [Task] })\n\nexport default function AppWrapper() {\n return (\n <RealmProvider><TaskApp /></RealmProvider>\n )\n}\n\nfunction TaskApp() {\n const realm = useRealm();\n const tasks = useQuery(Task);\n const [newDescription, setNewDescription] = useState(\"\")\n\n return (\n <SafeAreaView>\n <View style={{ flexDirection: 'row', justifyContent: 'center', margin: 10 }}>\n <TextInput\n value={newDescription}\n placeholder=\"Enter new task description\"\n onChangeText={setNewDescription}\n />\n <Pressable\n onPress={() => {\n realm.write(() => {\n realm.create(\"Task\", Task.generate(newDescription));\n });\n setNewDescription(\"\")\n }}><Text>➕</Text></Pressable>\n </View>\n <FlatList data={tasks.sorted(\"createdAt\")} keyExtractor={(item) => item._id.toHexString()} renderItem={({ item }) => {\n return (\n <View style={{ flexDirection: 'row', justifyContent: 'center', margin: 10 }}>\n <Pressable\nonPress={() =>\n realm.write(() => {\n item.isComplete = 
!item.isComplete\n })\n }><Text>{item.isComplete ? \"✅\" : \"☑️\"}</Text></Pressable>\n <Text style={{ paddingHorizontal: 10 }} >{item.description}</Text>\n <Pressable\n onPress={() => {\n realm.write(() => {\n realm.delete(item)\n })\n }} ><Text>{\"🗑️\"}</Text></Pressable>\n </View>\n );\n }} ></FlatList>\n </SafeAreaView >\n );\n}\n\n@realm/react@realm/reactrealm-jsreact-native",
"text": "Greetings Realm folks,My name is Andrew Meyer, one of the engineers at realm-js, and I am making a guest post today in @henna.s Realm Byte column to help React Native users get started with our new library @realm/react. @realm/react is a module built on top of realm-js with the specific purpose of making it easier to implement Realm in React.I wanted to provide a quick example for React Native developers, to get an idea of how easy it is to get started using Realm using @realm/react. Therefore, I made an 80 line example of how to create a simple task manager using the library.You only need to have @realm/react and realm installed in your project and you will be good to go. If you aren’t using TypeScript, simply modify the Task class to not use types.Here is a breakdown of the code.Setting up and thinking about your model is the first step in getting any application off the ground. For our simple app, we are defining a Task model with a description, completion flag, and creation timestamp. It also contains a unique _id, which is the primary key of the Task model. It’s good to define a primary key, in case you want to reference a single Task in your code later on.We have also added a generate method. This is a convenience function that we will use to create new tasks. It automatically generates a unique _id, sets the creation timestamp, and sets the description provided by its argument.The schema property is also required for Realm. This defines the structure of the model and tells Realm what to do with the data. Follow Realm Object Model for more information.Here is the code for setting up your model class:The next part of the code is a necessary part in setting up your application to interact with Realm using hooks. In this code, we are calling createRealmContext which will return an object containing a RealmProvider and a set of hooks (useRealm, useQuery and useObject).The RealmProvider must wrap your application in order to make use of the hooks. 
When the RealmProvider is rendered, it uses the configuration provided to createRealmContext to open the Realm. Alternatively, you can set the configuration through props on RealmProvider.Here is the code for setting up your application wrapper:Now that you have an idea of how to set everything up, let’s move on to the application. You can see right away that two of the hooks we generated are being used. useRealm is being used to perform any write operations, and useQuery is used to access all the Tasks that have been created.The application is providing a TextInput that will be used to generate a new Task. Once a Task is created, it will be displayed in the FlatList below. That timestamp we set up earlier is used to keep the list sorted so that the newest task is always at the top.In order to keep this code short, we skipped a few best practices. All the methods provided to the application should ideally be set to variables and wrapped in a useCallback hook, so that they are not redefined on every re-render. We are also using inline styles to spare a few more lines of code. One would normally generate a stylesheet using Stylesheet.create.Here is the code for the application component:Here is the example in full, including all the required import statements.For more details on how to use @realm/react check out our README and our documentation. If you are just getting started with React Native, you can also use our Expo templates to get started with minimal effort.And with that being said, what do you think about @realm/react? Any other examples you would like to see? We are working hard to make it easy to integrate realm-js with react-native, so let us know if you have any questions or feature requests!Happy Realming!",
"username": "Andrew_Meyer"
},
{
"code": "",
"text": "Wow, great development! I was waiting for a wrapper like this. To test it out it created a new Expo project based on the javascript Expo template you provided (not the TypeScript). The todo app runs flawlessly but once I enable the sync then it throws me a partitionValue must be of type ‘string’, ‘number’, ‘objectId’, or ‘null’ error. I tried several things to resolve it especially using the example code you provided for Native React. I got stuck. Any ideas what I do wrong here?",
"username": "Joost_Hazelzet"
},
{
"code": "return (\n <RealmProvider sync={{user: loggedInUserObject, partitionValue: \"someValue\"}} ><TaskApp /></RealmProvider>\n )\n",
"text": "Hi @Joost_Hazelzet, the partitionValue must be setup in Atlas.\n\nimage1420×786 73 KB\n\nIf you want to enable Sync, you will need to set a partitionValue. This can be arbitrary, but it is usually an ID that the data can be filtered on (as in this example, the userID).\nYou can dynamically set the partitionValue in the RealmProvider component:The value should mirror the same type that you have setup in Atlas.\nGlad you are enjoying the library! Let us know if this helps ",
"username": "Andrew_Meyer"
},
{
"code": "",
"text": "Hey Andrew, yes it was the missing partitionValue in the RealmProvider and and now the Todo app is running like a charm including the tasks synched to the Atlas database. Thank you. My code is available as Github repository https://github.com/JoostHazelzet/expoRealm-js in case somebody wants to use it.",
"username": "Joost_Hazelzet"
},
{
"code": "",
"text": "Can we setup multiple partition value in a single object?",
"username": "Zubair_Rajput"
},
{
"code": "",
"text": "Hi Andrew,The app I’m trying to build currently needs to keep track of ordering for a set of items. My data is structured such that a Group has a Realm.List. I am using the useObject() hook to get the Group, and then I’m trying to render the Group.items in a React Native FlatList. However, I can’t use the Realm.List type as an argument to the data prop of FlatList. I have tried using the useQuery() hook to get that same list of Items, but I need to preserve ordering in the Group, so what I really need is access to the Group so I can add/remove items from Group.items.Do you know of a way I can render a Realm.List?",
"username": "max_you"
},
{
"code": "",
"text": "Ah nevermind, I was able to get it working with react native’s VirtualizedList instead of using the FlatList which is more restrictive",
"username": "max_you"
},
{
"code": "RealmProvidercreateRealmContext",
"text": "You can create multiple RealmProviders with createRealmContext and have each using a different partition value. You will just have to export and use the hooks related to said partition.",
"username": "Andrew_Meyer"
},
{
"code": "Realm.ListFlatList",
"text": "I have written tests to do exactly what you have described. What exactly is the problem you are experiencing when trying to use a Realm.List in a FlatList? Feel free open an issue on github and provide some more information. I want to make sure that works ",
"username": "Andrew_Meyer"
},
{
"code": "keyExtractoritem: Item & Realm.ObjectItem",
"text": "I tried again and it works using a FlatList. I had just incorrectly specified an incorrect type for my parameter in my keyExtractor prop. I had specified the type of the variable as item: Item & Realm.Object instead of just Item. This was just a mistake on my part while I’m getting familiar with using Realm still! Thank you for the help!",
"username": "max_you"
},
{
"code": "",
"text": "Hi @Andrew_Meyer,This is a great example. Do you have something similar for implementing authentication? The example from Mongo talks uses React and I’m having trouble adapting it. A React Native example would be much appreciated.I’ll continue to search the forums to see if such an example already exists.Thanks,\nJosh",
"username": "Joshua_Barnard"
},
{
"code": "",
"text": "Thanks! Good information",
"username": "Ikbal_Sk"
},
{
"code": "",
"text": "The missing part in this tutorial is how do you access the Realm instance outside of the components (unless i’m missing something not everything is a component in a React app).\nWhen I tried the useRealm / useQuery / … hooks outside of components, I got the “Hooks can only be called inside of the body of a function component”.\nAnd if I try to create a new Realm() outside, either I get errors because the Realm is already opened with another schema OR the realm instance close randomly (Garbage Collected?).So I’m really curious to know how we are supposed to handle this.",
"username": "Julien_Curro"
},
{
"code": "RealmProvidercloseOnUnmountRealmProvidercloseOnUnmountRealmProvider",
"text": "@Julien_Curro Thanks for reaching out. This is doable. We have added a flag to the RealmProvider in version 0.6.0 called closeOnUnmount which can be set to false to stop the Realm from closing if you are trying to do anything with the Realm instance outside of the component. Without setting this flag, as soon as the RealmProvider goes out of scope, the realm instance instantiated therein will be closed.\nIt’s important to note, that with realm instances that are instantiated and point to the same realm, when one of them is closed, they all close. We will address this in a future version of the Realm JS SDK, but for now, the closeOnUnmount flag can be used to workaround this.\nAnother note, any of the hooks will only work within components rendered as children of the RealmProvider. Anything done with a realm instance outside of this provider must be done without hooks. This includes registering your own change listeners if you want to react to changes on data, which the hooks handle automatically.\nLet us know if you have any other issues ",
"username": "Andrew_Meyer"
},
{
"code": "closeOnUnmountimport { Realm } from '@realm/react';\nimport { realmConfig } from \"./schema\";\n\nclass RealmInstance {\n private static _instance: RealmInstance;\n public realm!: Realm;\n private constructor() {}\n\n public static getInstance(): RealmInstance { \n if (!RealmInstance._instance) {\n RealmInstance._instance = new RealmInstance();\n RealmInstance._instance.realm = new Realm(realmConfig);\n console.log('DB PATH:', RealmInstance._instance.realm.path)\n }\n if (RealmInstance._instance.realm.isClosed) {\n RealmInstance._instance.realm = new Realm(realmConfig);\n }\n\n return RealmInstance._instance;\n }\n}\n\nexport default RealmInstance.getInstance().realm;\n",
"text": "I am testing the closeOnUnmount prop, but I don’t know how I am supposed to get the 2nd non-context related realm instance.Am I supposed to use the realmRef prop ? Or should I justBefore your post I was trying with an ugly singleton like this :Edit: is there a discord somewhere to talk about Realm ? There’s a mongodb server, but nobody seems to know what Realm is ",
"username": "Julien_Curro"
},
{
"code": "realmConfig<RealmProvider {...realmConfig} closeOnUnmount={false}>new RealmRealm.opensync",
"text": "@Julien_Curro The singleton example you posted should work for this purpose. The realmConfig you are using here is spreadable onto <RealmProvider {...realmConfig} closeOnUnmount={false}>. If you open a Realm with the same config, you get a shared instance of the same Realm.\nThe only change I would suggest is to change new Realm to Realm.open. Realm.open is async and more suited for a Realm configured with sync settings.At the moment we do not have a discord, but you are not the first to ask about this. We are currently trying to merge Realm even closer the MongoDB, so hopefully in the near future the discord is more knowledgable on these topics.",
"username": "Andrew_Meyer"
},
{
"code": "closeOnUnmountimport { Realm } from '@realm/react';\nimport { realmConfig } from \"./schema\";\n\nconst RealmInstance = new Realm(realmConfig)\nexport default RealmInstance\n",
"text": "closeOnUnmount seems to be working and I finally simplified the singleton, since in TS I can export the instance instead of a class, it’s easier like that :Thanks for your time, and would be happy to ask other things to you on any discord you want ",
"username": "Julien_Curro"
},
{
"code": "",
"text": "Hello @Julien_Curro ,Thank you for raising your questions and thanks @Andrew_Meyer for taking the time to help our fellow member @Julien_Curro , please feel free to ask questions and share your solutions in the community forum in related categories so we have a knowledge house of information everyone can benefit from We as a community appreciate your contributions Happy Coding!Cheers, \nHenna\nCommunity Manager, MongoDB",
"username": "henna.s"
},
{
"code": "<RealmProvider \nsync={{\n flexible: true,\n onError: (_, error) => {\n console.log(error);\n },\n }}\n>\n<SubscriptionProvider>\n<TaskApp />\n<SubscriptionProvider>\n</RealmProvider>\n",
"text": "Hey @henna.s , @Andrew_MeyerI am using realm with device sync. Can I set up a SubscriptionProvider instead of doing the subscription directly in the screens?",
"username": "Siso_Ngqolosi"
},
{
"code": "",
"text": "@Siso_Ngqolosi This is allowed and looks like a good setup. Subscriptions are globally defined, so if you apply them in any section of you app it will effect all components using the Realm.\nLet us know if you have any issues!",
"username": "Andrew_Meyer"
}
] | [
"node-js",
"react-native",
"react-js",
"mobile-bytes"
] |
2021-08-26T21:48:16.961Z | null | 16,746 | IsConnected not a function in next.js app | IsConnected not a function in next.js app | [
{
"code": "",
"text": "I’m starting to use the mongodb driver for next.js. I followed the example of How to Integrate MongoDB Into Your Next.js App | MongoDB but first I found that the example code doesn’t work anymore but I manage to make it work with a few tweaks.Now I want to use it in my own app, so I went and npm install mongodb and everything is working, I can query the database and get the results, but the weird thing is the function isConnected is not working. I get a message saying: “TypeError: client.isConnected is not a function”.I’m new with next.js and mongodb, so I don’t get what is happening, in the demo app with the example the function works without problems, but in my app I get this message.Can somebody help me how to make it work? This is the line with the error:const isConnected = await client.isConnected()Thanks everybody.",
"username": "Alejandro_Chavero"
},
{
"code": "import { connectToDatabase } from '../lib/mongodb'\nconst { client } = await connectToDatabase()\nimport clientPromise from '../lib/mongodb'\nconst client = await clientPromise\nclient.isConnected()import { MongoClient } from 'mongodb'\n\nconst uri = process.env.MONGODB_URI\nconst options = {\n useUnifiedTopology: true,\n useNewUrlParser: true,\n}\n\nlet client\nlet clientPromise\n\nif (!process.env.MONGODB_URI) {\n throw new Error('Please add your Mongo URI to .env.local')\n}\n\nif (process.env.NODE_ENV === 'development') {\n // In development mode, use a global variable so that the value\n // is preserved across module reloads caused by HMR (Hot Module Replacement).\n if (!global._mongoClientPromise) {\n client = new MongoClient(uri, options)\n global._mongoClientPromise = client.connect()\n }\n clientPromise = global._mongoClientPromise\n} else {\n // In production mode, it's best to not use a global variable.\n client = new MongoClient(uri, options)\n clientPromise = client.connect()\n}\n\n// Export a module-scoped MongoClient promise. By doing this in a\n// separate module, the client can be shared across functions.\nexport default clientPromise\n",
"text": "Hey Alejandro -We just pushed a huge update to the next.js repo that changes how a couple of things work, so the issue is def NOT on your end. You’ll just have to make a few minor tweaks.The updated code is here:canary/examples/with-mongodbThe React Framework. Contribute to vercel/next.js development by creating an account on GitHub.But essentially, the way we import the library has changed.Instead of importingand callingTo get our connection to the database. We’ll instead import the library like so:and to access a database in our getServerSideProps, we’ll do:and now the client.isConnected() function should work.The updated library itself looks like this:I would check out the code here for further instructions:\nhttps://github.com/vercel/next.js/blob/canary/examples/with-mongodb/pages/index.jsPlease let me know if that helps! I will update the blog post in the next few days as well to reflect these changes.Thanks!",
"username": "ado"
},
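The dev-mode caching idea in the library snippet above can be illustrated without a real database. In this sketch, createClient is a stand-in (an assumption for illustration) for new MongoClient(uri, options).connect():

```javascript
// Cache the connection promise on a shared scope so HMR module reloads in
// development reuse one pending/established connection instead of opening
// a new one on every reload.
function getClientPromise(globalScope, createClient) {
  if (!globalScope._mongoClientPromise) {
    globalScope._mongoClientPromise = createClient();
  }
  return globalScope._mongoClientPromise;
}

// Simulate two module evaluations against the same global scope.
const fakeGlobal = {};
let connections = 0;
const createClient = () => { connections += 1; return Promise.resolve({ ok: true }); };
const first = getClientPromise(fakeGlobal, createClient);
const second = getClientPromise(fakeGlobal, createClient);
console.log(first === second, connections); // true 1
```

Both lookups return the same promise and the factory runs once, which is exactly why the real module stores `client.connect()` on `global._mongoClientPromise` in development.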
{
"code": "",
"text": "Thanks for your help Ado, sadly I still get the error that IsConnected is not a function. If I take out the app is connecting and querying the DB without problem.",
"username": "Alejandro_Chavero"
},
{
"code": "",
"text": "Yes, I have same problem with Alejandro even though I followed exact code as posted. I hope it will be notified by more developers and be fixed soon.",
"username": "Brandon_Lee"
},
{
"code": "mongodb@^[email protected]",
"text": "In the Next.js example, they used mongodb@^3.5.9.mongo@latest, which is 4.1.1 as of today, does not have isConnected method on MongoClient as far as I see. So if you just installed mongo in your own project, this might be it.",
"username": "nefil1m"
},
{
"code": "",
"text": "Well it’s nice to know I’m not the only one having problems with the isConnected not being a function.",
"username": "Jeff_Woltjen"
},
{
"code": "",
"text": "thanks and do you know an alternative to this function?",
"username": "Alejandro_Chavero"
},
{
"code": "getServerSidePropsexport async function getServerSideProps(context) {\n\n let isConnected;\n try {\n const client = await clientPromise\n isConnected = true;\n } catch(e) {\n console.log(e);\n isConnected = false;\n }\n\n return {\n props: { isConnected },\n }\n}\n",
"text": "to solve the “isConnected is not a function” error, change the function getServerSideProps to:Thanks,\nRafael,",
"username": "Rafael_Green"
},
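Rafael's try/catch approach generalizes to a small helper. checkConnected is a hypothetical name for this sketch; in a real app you would pass the clientPromise exported from lib/mongodb, whose resolution signals that connect() succeeded:

```javascript
// Hypothetical replacement for the removed client.isConnected() of driver 4.x:
// awaiting the exported connection promise tells you whether the connection
// was established (resolved) or failed (rejected).
async function checkConnected(clientPromise) {
  try {
    await clientPromise;
    return true;
  } catch (e) {
    return false;
  }
}

// Stand-in promises for illustration; real code passes the driver's promise.
checkConnected(Promise.resolve({})).then(ok => console.log(ok));               // true
checkConnected(Promise.reject(new Error('down'))).then(ok => console.log(ok)); // false
```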
{
"code": "",
"text": "But how I create /api routing for api calls? I need to create API routes inside /page directory, but it will not accept mongodb.js. Can you share the code of API call, please?",
"username": "Il_Chi"
},
{
"code": "import clientPromise from \"../../../lib/mongodb\";\nexport default async (req, res) => {\n const client = await clientPromise\n const { fieldvalue } = req.query\n const database = client.db('databasename');\n const userdb = await database.collection('collectionname')\n .find({ \"<field>\": `${ fieldvalue }` })\n .project({ \"_id\": 0 })\n .toArray();\n res.json(userdb)\n}",
"text": "This is how I’m calling my api routes:",
"username": "Alejandro_Chavero"
},
{
"code": "",
"text": "For that problem, the standard solution is to import clientPromise because versions higher than 3.9/4.0 do not have \"import {Mongoclient} \" command.Then also, if you want to use {MongoClient} then,Now it will work",
"username": "Bhagya_Shah"
},
{
"code": "",
"text": "Hi,\nIs there a special reason that we export clientPromise? If I export it, I need to access to db object and pick the database I want to work with in each route. So I don’t want to repeat myself. Of course I can find a quick solution, before spending time I just wanted to learn what is the reason we do it like that.\nThanks",
"username": "Yalcin_OZER"
}
] | [
"node-js",
"next-js"
] |
2022-10-31T09:03:37.901Z | null | 5,092 | Realm Flexible Sync not working properly in Swift | Realm Flexible Sync not working properly in Swift | [
{
"code": "class Comment: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) public var _id: String = UUID().uuidString\n @Persisted public var ownerId: String\n @Persisted public var comment: String\n}\nlet app = App(id: \"xxxxxxx\")\n@main\nstruct TestSyncApp: SwiftUI.App {\n var body: some Scene {\n WindowGroup {\n if let app = app {\n AppView(app: app)\n .frame(maxWidth: .infinity, maxHeight: .infinity)\n }\n else {\n Text(\"No RealmApp found!\")\n }\n }\n }\n}\n\nstruct AppView: View {\n @ObservedObject var app: RealmSwift.App\n\n var body: some View {\n if let user = app.currentUser {\n let config = user.flexibleSyncConfiguration(initialSubscriptions: { subs in\n if subs.first(named: \"Comment\") != nil {\n return\n }\n else {\n subs.append(QuerySubscription<Comment>(name: \"Comment\"))\n }\n })\n OpenSyncedRealmView()\n .environment(\\.realmConfiguration, config)\n .environmentObject(user)\n }\n else {\n LoginView()\n }\n }\n}\n\nstruct OpenSyncedRealmView: View {\n @AutoOpen(appId: \"xxxxxxx\", timeout: 4000) var realmOpen\n\n var body: some View {\n switch realmOpen {\n case .connecting,.waitingForUser,.progress(_):\n ProgressView(\"waiting ...\")\n case .open(let realm):\n RealmContentView()\n .environment(\\.realm, realm)\n case .error(let error):\n Text(\"opening realm error: \\(error.localizedDescription)\")\n }\n }\n}\nstruct RealmContentView: View {\n @Environment(\\.realm) var realm: Realm\n @ObservedResults(Comment.self) var comments\n @State var subscribeToEmail: String = \"\"\n\n var body: some View {\n VStack {\n HStack {\n Spacer()\n Text(\"SubscribeTo:\")\n TextField(\"Email\", text: $subscribeToEmail)\n Button {\n if let user = app.currentUser {\n Task {\n do {\n _ = try await user.functions.subscribeToUser([AnyBSON(subscribeToEmail)])\n }\n catch {\n print(\"Function call failed - Error: \\(error.localizedDescription)\")\n }\n }\n }\n } label: {\n Image(systemName: \"mail\")\n }\n Text(\"New Comment:\")\n Button {\n let 
dateFormatter : DateFormatter = DateFormatter()\n dateFormatter.dateFormat = \"yyyy-MMM-dd HH:mm:ss.SSSS\"\n let date = Date()\n let dateString = dateFormatter.string(from: date)\n\n let newComment = Comment()\n newComment.comment = \"\\(app.currentUser!.id) - \\(dateString)\"\n newComment.ownerId = app.currentUser!.id\n $comments.append(newComment)\n } label: {\n Image(systemName: \"plus\")\n }\n }\n .padding()\n if comments.isEmpty {\n Text(\"No Comments here!\")\n }\n else {\n List {\n ForEach(comments) { comment in\n Text(comment.comment)\n .listRowBackground(comment.ownerId == app.currentUser!.id ? Color.white: Color.green)\n }\n }\n .listStyle(.automatic)\n }\n }\n }\n}\n",
"text": "Hi there,we try to implement the “Restricted News Feed” example in Swift from the Flexible Sync Permissions Guide.We couldn’t check out the example via the template, so we had to copy the backend related things from the guide to a newly created app. (enabled email authentication, added the authentication trigger, the function to subscribe to someone else, enabled custom userdata etc…)The backend seems to work as it should.On client side we implemented a simple comment Object, with String Data to display:User now can log in to the client and create comments - and sync it (works as expected). And they could subscribe to other users comments like in the example from the guide (with the same server function as in the guide). On the server we can see that the data is correct.The problem now: on client side nothing happens when a user subscribes to another users comment. The other users comments won’t be synced…Only when the user deletes his app from the device, reinstalls it and logs in with the same user as before - then he can see his comments and the comments from the user he subscripted to.Here is the code for initializing the realm in SwiftUI:And the code for displaying the comments:Did we miss something? Do we have to manage/handle subscriptions in a different way? Or have we found a bug?Thanks for any help!",
"username": "Dan_Ivan"
},
{
"code": "rerunOnOpentruelet config = user.flexibleSyncConfiguration(initialSubscriptions: { subs in\n if subs.first(named: \"Comment\") != nil {\n return\n }\n else {\n subs.append(QuerySubscription<Comment>(name: \"Comment\"))\n }\n}, rerunOnOpen: true)\n",
"text": "Hey Dan - there are a couple of things you might try to resolve this. I haven’t looked in detail at this permissions model so I’m not sure which is your best fix.One option is to try setting the rerunOnOpen parameter to true in your Sync Configuration. So:This forces the subscriptions to recalculate every time the app is opened, and might resolve the need to delete/reinstall. But it would still require the user to close the app and re-open it to see the updated subscriptions. Let me know if that works, and if not, I may have some other suggestions to try.",
"username": "Dachary_Carey"
},
{
"code": "rerunOnOpen: true",
"text": "Hey Dachary!many thanks for the answer!\nWe had tried the rerunOnOpen: true before and now again on your advice.Unfortunately that doesn’t change anything. The other users’s data remains unsynced until the user deletes and reinstalls the app.We look forward to other suggestions!Kind regards,\nDan",
"username": "Dan_Ivan"
},
{
"code": "let config = user.flexibleSyncConfiguration(initialSubscriptions: { subs in\n if subs.first(named: \"Comment\") != nil {\n return\n }\n else {\n subs.append(QuerySubscription<Comment>(name: \"Comment\"))\n }\n}, clientResetMode: .recoverUnsyncedChanges())\n",
"text": "Ok, Dan - I’ve dug a little deeper here and have another suggestion to try. The docs for the restricted news feed state:changes don’t take effect until the current session is closed and a new session is started.I believe this is because we are effectively setting a new session role role for the user.The Swift SDK provides APIs to suspend and resume a Sync session. I believe that if you suspend and then resume Sync, that will trigger a session role change and the user should be able to sync the new comments. This may trigger a client reset, so you’ll want to set a client reset mode in your sync configuration. This would look something like:This should then trigger the realm to re-sync relevant comments based on the updated subscription.We do have some work planned in the future to improve this process, but I think this is roughly what you’ll need to do to handle it currently.",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "Hey Dachary,unfortunately, setting the client reset doesn’t do anything. same sync behavior as before.we took a closer look at the realm logs: we found nothing that indicates a client reset. It seems that the client reset is never triggered and maybe that is the underlying problem?",
"username": "Dan_Ivan"
},
{
"code": "",
"text": "Are you finding there is no client reset after suspending and resuming sync? I would not expect the client reset to occur until after the Sync session stops and a new one starts. This makes me wonder if there is still an active Sync session and that’s why the role change isn’t happening & new relevant docs are not getting synced.",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "No, we can’t find in the log anything that indicates a client reset. We call the function to subscribe to the comments of another user, then we suspend on the synced realm, then we resume the synced realm - we see in the logs that the first sync session is closed and disconnected and that another sync session is started - but nothing about a client reset.",
"username": "Dan_Ivan"
},
{
"code": "",
"text": "Got it. A client reset may not be expected in this case - I know we’ve been doing work around reducing the need for client resets under certain scenarios. It’s also possible this isn’t a role change, and I’m conflating this with another permissions scenario.I’ll tag our engineers and see if I can find any other suggestions for you.",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "Are there any news regarding this issue?We are working on a project that relies on similar functionality and this issue is currently blocking our development.\nWould it be advisable to book an appointment with an engineer at MongoDB (Flex Consulting) to solve this quickly?Thank you!",
"username": "Dan_Ivan"
},
{
"code": "",
"text": "@Dan_Ivan What’s the question? I will say that we have released a Client Reset with Automatic Recovery across all of our SDKs which should perform this recovery and reset logic for you automatically under the hood -",
"username": "Ian_Ward"
},
{
"code": "owner_idownerId",
"text": "I did check with engineering, and they spotted that the docs & backend use owner_id as the queryable field, but the snippet you’ve posted here uses ownerId. If that’s the issue, you should be seeing in the logs that the field used in permissions is not a queryable field.If that doesn’t solve the issue, then some debugging directly with our engineers is probably the right next step.",
"username": "Dachary_Carey"
},
{
"code": "owner_id",
"text": "I’m facing the same issue. And I use proper owner_id",
"username": "Alexandar_Dimcevski"
},
{
"code": "",
"text": "@Alexandar_Dimcevski What error are you getting?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "No error. But rerun on open doesn’t run when I close and open the app",
"username": "Alexandar_Dimcevski"
},
{
"code": "",
"text": "We had a support session with a MongoDB support engineer and found out that this is not yet fully implemented in the Swift SDK: currently the realm won’t change automatically if - as in the example above - the flexible sync permissions change due to a change in the custom data (session role change).\nThe only safe way at the moment - according to the MongoDB engineer - is to “log out and log in the user mandatorily”. Then the data is correctly synchronized again with the new permissions.He also told us: “It should be noted that the feature to handle role changes without client reset is under active consideration and is being developed now it may take some time to be available for the\ngeneral public.”It would be very interesting to hear from official MongoDB staff here when we can expect this feature to be implemented - because it is not reasonable that users have to log out and log in again to get their data synced correctly!",
"username": "Dan_Ivan"
},
{
"code": "\"ChatMessage\": [\n {\n \"name\": \"anyone\",\n \"applyWhen\": {},\n \"read\": {},\n \"write\": {\n \"authorID\": \"%%user.id\"\n }\n }\n],\n",
"text": "Is there a reason to store the subscriptions inside of custom_data? Perhaps it’d work if you made a synced User object instead of using using custom_data. The RChat example does this.flex-syncContribute to realm/RChat development by creating an account on GitHub.edit: Also it seems the RChat app lets anyone read chats, so maybe that doesn’t actually work.",
"username": "Jonathan_Czeck"
},
{
"code": "",
"text": "Hi, is there any update on this issue?",
"username": "Dominik_Hait"
},
{
"code": "",
"text": "@Dachary_Carey Can you help here?",
"username": "Dominik_Hait"
},
{
"code": "",
"text": "Dominik,\nToday, permissions are cached per sync session. as @Dan_Ivan mentioned previously. While this is an area of planned improvement, a permission change (for instance, a change to custom user data) is only guaranteed to take effect after the sync session restarts (ie disconnect and reconnect / log out and log back in).We would recommend changing the subscription rather than the permissions to change what the user sees.",
"username": "Sudarshan_Muralidhar"
},
{
"code": "",
"text": "@Sudarshan_Muralidhar If I do as you say, there is absolutely Zero data security. Anyone can see anything.That’s completely nonviable, dangerous and irresponsible to suggest!Take the collaboration examples off the site until you actually support collaboration.The collaboration approach suggested in the official docs does not work for reasons you wrote. Why do you suggest people do this?-Jon",
"username": "Jonathan_Czeck"
}
] | [
"swift",
"flexible-sync"
] |
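A recurring pitfall in the thread above was a naming mismatch: the client's Swift model synced on `ownerId` while the backend's queryable field was `owner_id`. A pre-flight check comparing the client's field names against the server's configured queryable fields can surface this class of bug early. This is a hedged, illustrative sketch only - the field lists below are hypothetical examples, not the thread's actual configuration.

```javascript
// Illustrative sketch: report client model fields that are not among the
// Flexible Sync backend's queryable fields. Both lists are example values.
function findUnqueryableFields(clientFields, queryableFields) {
  const queryable = new Set(queryableFields);
  return clientFields.filter((f) => !queryable.has(f));
}

// The backend in the thread used snake_case; the Swift model used camelCase.
const mismatches = findUnqueryableFields(
  ["_id", "ownerId"],
  ["_id", "owner_id"]
);
console.log(mismatches); // → ["ownerId"]
```

Running such a check against the App Services configuration (for example, during CI) would have flagged `ownerId` as unqueryable long before the silent sync failure.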
2021-01-29T04:24:34.272Z | null | 15,313 | Error Throwing Unrecognized pipeline stage name: '$search' | Error Throwing Unrecognized pipeline stage name: ‘$search’ | [
{
"code": "db.collection.aggregate([{\n$search: {\n\ttext: {\n\t\tquery: 'multi word query',\n\t\tpath: [...some fields],\n\t},\n}}]);\nUncaught exception: Error: command failed: {\n\t\"ok\" : 0,\n\t\"errmsg\" : \"Unrecognized pipeline stage name: '$search'\",\n\t\"code\" : 40324,\n\t\"codeName\" : \"Location40324\"\n} : aggregate failed :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\ndoassert@src/mongo/shell/assert.js:18:14\n_assertCommandWorked@src/mongo/shell/assert.js:583:17\nassert.commandWorked@src/mongo/shell/assert.js:673:16\nDB.prototype._runAggregate@src/mongo/shell/db.js:266:5\nDBCollection.prototype.aggregate@src/mongo/shell/collection.js:1012:12\nDBCollection.prototype.aggregate@:1:355\n@(shell):1:1\n",
"text": "I am using MongoDB atlas search.It is always says…below error.Error:Could you help to unblock.",
"username": "Merlin_Baptista_B"
},
{
"code": "",
"text": "Hi @Merlin_Baptista_B,Welcome to MongoDB communityHave you created an atlas search index on that collection?What is the atlas cluster version?It is supported from 4.2+:Get started quickly with Atlas Search by loading sample data to your cluster, creating a search index, and querying your collection.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I have the same problem, I have created search index on Atlas cloud via search UI (default search index for now)\nHow can I resolve this issue?",
"username": "7b55b5f9a91383655fec26662aab12c"
},
{
"code": "",
"text": "Problem solved. I was using local DB instead of the Atlas one",
"username": "7b55b5f9a91383655fec26662aab12c"
},
{
"code": "",
"text": "Check if you are looking at the right DB, on the local DB search can’t work as far as I understand",
"username": "7b55b5f9a91383655fec26662aab12c"
}
] | [] |
2022-07-24T17:37:56.698Z | null | 30,358 | Unable to connect db because of "throw new MongoParseError('Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"');" | Unable to connect db because of “throw new MongoParseError(‘Invalid scheme, expected connection string to start with “mongodb://” or “mongodb+srv://”’);” | [
{
"code": "throw new MongoParseError('Invalid scheme, expected connection string to start with \"mongodb://\" or \"mongodb+srv://\"');\n ^\n\nMongoParseError: Invalid scheme, expected connection string to start with \"mongodb://\" or \"mongodb+srv://\" ...\nconst { ApolloServer, gql } = require(\"apollo-server\");\n\nconst { MongoClient } = require(\"mongodb\");\n\nconst dotenv = require(\"dotenv\");\n\ndotenv.config();\n\nconst { DB_URI, DB_NAME } = process.env;\n\n// A schema is a collection of type definitions (hence \"typeDefs\")\n\n// that together define the \"shape\" of queries that are executed against\n\n// your data.\n\nconst typeDefs = gql`\n\n # Comments in GraphQL strings (such as this one) start with the hash (#) symbol.\n\n # This \"Book\" type defines the queryable fields for every book in our data source.\n\n type Book {\n\n title: String\n\n author: String\n\n }\n\n # The \"Query\" type is special: it lists all of the available queries that\n\n # clients can execute, along with the return type for each. In this\n\n # case, the \"books\" query returns an array of zero or more Books (defined above).\n\n type Query {\n\n books: [Book]\n\n }\n\n`;\n\nconst books = [\n\n {\n\n title: \"The Awakening\",\n\n author: \"Kate Chopin\",\n\n },\n\n {\n\n title: \"City of Glass\",\n\n author: \"Paul Auster\",\n\n },\n\n];\n\n// Resolvers define the technique for fetching the types defined in the\n\n// schema. 
This resolver retrieves books from the \"books\" array above.\n\nconst resolvers = {\n\n Query: {\n\n books: () => books,\n\n },\n\n};\n\nconst start = async () => {\n\n const client = new MongoClient(DB_URI, {\n\n useNewUrlParser: true,\n\n useUnifiedTopology: true,\n\n });\n\n await client.connect();\n\n const db = client.db(DB_NAME);\n\n // The ApolloServer constructor requires two parameters: your schema\n\n // definition and your set of resolvers.\n\n const server = new ApolloServer({\n\n typeDefs,\n\n resolvers,\n\n csrfPrevention: true,\n\n cache: \"bounded\",\n\n });\n\n // The `listen` method launches a web server.\n\n server.listen().then(({ url }) => {\n\n console.log(`🚀 Server ready at ${url}`);\n\n });\n\n};\n\nstart();\nDB_URI =\n\n \"mongodb+srv://username:[email protected]/?retryWrites=true&w=majority\";\n\nDB_NAME = Cluster0;\n",
"text": "I am the beginner in programming. Please help me out, dear experienced programmers, if you can.\nNow I am trying to do a simple To-do app. And I want to use there database. I am stuck for already 12 hours on the stage where it is needed to connect database.\nI have the following error after running the command “node index.js”:I have the next code in index.js:And the file .env:Thank you in advance! ",
"username": "olena_dunamiss"
},
{
"code": "DB_URIconsole.log(DB_URI)new MongoClient",
"text": "Hi @olena_dunamiss welcome to the community!So a big welcome to the coders club! What I gathered so far is that you’re trying to use Apollo GrapQL to connect to a MongoDB database, to create a todo list app. Is this correct?Could you provide some more details:If you’re just starting to code, I would suggest you to learn MongoDB in isolation first (without Apollo or GraphQL) by following MongoDB and Node.js Tutorial - CRUD Operations.Regarding MongoDB + Node, I would also suggest you take a look at the free MongoDB University courses: M001 MongoDB Basics and M220JS MongoDB for JavaScript Developers (although please note that M220JS assumes some familiarity with Javascript/Node).Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "console.log(DB_URI)undefined\nC:\\Users\\Svetl\\test\\node_modules\\mongodb-connection-string-url\\lib\\index.js:9\n return (connectionString.startsWith('mongodb://') ||\n ^\n\nTypeError: Cannot read properties of undefined (reading 'startsWith')``",
"text": "Thank you so much for your response and the materials that you added!\nThe tutorial: Build a GraphQL API with NodeJS and MongoDB (Full-stack MERN Tutorial ) - YouTube\nI added console.log(DB_URI) and I have the following:",
"username": "olena_dunamiss"
},
{
"code": "",
"text": "did you install dotenv package, I almost had same error and I found out I need to install dotenv package so I can access to .env variables dotenv in npm registry",
"username": "Neck_Abdullah"
},
{
"code": "\"mongodb+srv://username:[email protected]/?retryWrites=true&w=majority\"\n",
"text": "I had this same issue and I resolved it by simply removing the “;” at the end of the connection string. So you connection string should just be:That is without the semi-colon at the end.",
"username": "Ubong_Udotai"
},
{
"code": "",
"text": "At this moment, I believe I should just quit my job and become a chef… Thank you, kind sir.",
"username": "Adam_Machowczyk"
},
{
"code": "",
"text": "Lol… What are you currently working as btw… thanks Adam!",
"username": "Ubong_Udotai"
},
{
"code": "",
"text": "This was my problem when the mongoose.connect() in my index.js file would not read my process.env. value.Inside my .env file I had the semi-colon at the end as I usually do when writing JavaScript code:\nMONGO_URL=“mongodb+srv:…”;After removing the semi-colons in my .env files thing were finally working in index.js.Thank you for this solution.",
"username": "Brian_A"
},
{
"code": "",
"text": "I don’t know if you are still having this problem, but as one other person said you need to remove the semi-colon ( ; ) from your variables in your .env file. That should make the process.env work properly.",
"username": "Brian_A"
},
{
"code": "",
"text": "WTH!!! Thanks man. Can’t understand coding anymore. 2 full days trying to sort this…",
"username": "jnr_wadeya"
},
{
"code": "",
"text": "Awesome, I removed the ; from end of line of .env and it’s resolved. Thank you;",
"username": "amastaneh"
},
{
"code": "",
"text": "Fantastic, I removed the semicolon at the end of the line .env, and the problem is solved. Thank you;",
"username": "amastaneh"
},
{
"code": "",
"text": "After five long hours of troubleshooting, I see this response. And Guess what happened.",
"username": "Tohirul_Islam"
},
{
"code": "",
"text": "After 14 hours of troubleshooting, I finally found the solution to my problem. I was able to find the solution to my problem by searching the documentation for “connection string”.See - https://www.mongodb.com/docs/atlas/troubleshoot-connection/#special-characters-in-connection-string-password",
"username": "T.R_Methu_N_A"
},
{
"code": "",
"text": "This solved my problem, too - thank you! Very glad I stumbled upon your solution.",
"username": "Natalie_Gillam"
}
] | [
"node-js",
"connecting",
"atlas-cluster",
"graphql"
] |
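Several replies in the thread above traced the "Invalid scheme" error to a trailing semicolon (and sometimes surrounding quotes) in the `.env` value: `.env` files are not JavaScript, so dotenv keeps those characters as part of the string. A small defensive sanitizer makes this failure mode obvious. This is an illustrative sketch, not dotenv or driver behavior - the helper name and URI are made up for the example.

```javascript
// Hedged sketch: a .env line like
//   DB_URI = "mongodb+srv://...";
// leaves quotes and a trailing semicolon inside the parsed value, which then
// fails MongoClient's scheme check. Strip them and fail fast with a clear error.
function sanitizeUri(raw) {
  const uri = (raw ?? "")
    .trim()
    .replace(/;$/, "") // drop a trailing semicolon copied from JS habits
    .replace(/^["']|["']$/g, ""); // drop surrounding quotes
  if (!uri.startsWith("mongodb://") && !uri.startsWith("mongodb+srv://")) {
    throw new Error(`Invalid scheme in connection string: ${uri || "(empty)"}`);
  }
  return uri;
}

console.log(sanitizeUri('"mongodb+srv://u:[email protected]/db";'));
// → mongodb+srv://u:[email protected]/db
```

An empty result also throws, which turns the confusing "Cannot read properties of undefined (reading 'startsWith')" symptom (dotenv not loaded, variable undefined) into an explicit message.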
2020-12-20T20:02:18.134Z | null | 9,877 | Could I connect my front-end directly to Atlas? | Could I connect my front-end directly to Atlas? | [
{
"code": "",
"text": "Hi, the community,I’m developing a small web app that can store my movies watch list. I’m storing the data to the browser’s local storage and going to make it online so that I can access it from anywhere.Because the application is really small, I don’t want to set up a back-end between the web app and database cloud. So as the title, could I connect my front-end directly to the Atlas without using any back-end? Or could I just call APIs to CRUD data in Atlas directly from my clients?Thank you,",
"username": "IM_Coder"
},
{
"code": "",
"text": "Hi @IM_Coder,Welcome to MongoDB community.For this use case a Realm application with a realm-web sdk is the perfect solution.https://docs.mongodb.com/realm/get-started/introduction-webThis is the most easy and optimized way to focus on your front-end app while having an elastic managed backend. Realm apps have a generous free tier therefore you should be good.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "MongoDB just launched the Atlas Data API, which allows you to perform CRUD operations on your Atlas data through simple HTTP requests https://www.mongodb.com/docs/atlas/api/data-api/",
"username": "Drew_Beckmen"
},
{
"code": "",
"text": "Using the data api is it secure enough to put the end points on the front end? for writing to database?",
"username": "Rishi_uttam"
},
{
"code": "curl",
"text": "putting end-points to the front end? unless it is “read-only”, that would mean anyone would have access to your database and result in havoc.For read-only purposes, this direct connection is great. you would just be working on a functional/dynamic web content such a stock-market following.But for write access, it is a whole lot of story. Security is the main concept here and you would not want free access to your database. The usual way is to have your own back-end API to communicate with your database and so keep your database credentials secure (as much possible as your host settings allows).Using Realm or Data API is best to use with your IoT devices as they mostly don’t have enough memory to put whole drivers. They can communicate with basic TCP requests to write to the database. As long as you keep those devices in safe places, you can have as many as you want to write to the database. or at least give them access to a very limited resource.Or you may do the same base access from the terminal anytime with tools like good old curl. Or write PoC API fast without going into driver details (Javascript or Python is great for prototyping). Same applies to front-end; for PoC purposes create temporary access points.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thank you for that detailed reply. That all makes sense and was how I thought the data api would work on the front end. My goal in finding a new way to connect to my database was speed more than anything else.The other option is to use the realm-web sdk and this would solve my problem of calling the db from the front end. However its a big js bundle as I learned in one of my previous projects. Downloading realm 132kb unminified just to establish a connection and send data/authenticate is a bit much.When i tried the data api, this is what I found:\nI used Atlas 3rd party http triggers as a proxy to call the DATA API, but under my tests in the past it takes about 3-4 seconds to return a response .The whole cycle using the data api takes about 2.5 seconds on the quick side.Realm SDK is much faster in my tests but the bundle is quite big.So I am going back to what I did in the past which is simply use AWS lambda HK region running the official mongo db driver, which connects directly to my database. When the function is warm i get results in under 50 ms. – I wish this was the same with the data api, but its up to 10x as long. I hope Mongo Atlas can figure this out. Even calling a cloudflare edge worker still results in the same delay when calling the data api (as its still in preview, the data api end points are only in a few regions)Sorry i went on a tangent, but duration and bundle size is our current problem, surely writing less backend code would come in super handy. Currently we using realm-web sdk for larger projects with good internet connections, but for smaller projects with mobile connections realm sdk is too much weight.Anyway thanks for your help,.",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "This post was flagged by the community and is temporarily hidden.",
"username": "Rishi_uttam"
}
] | [] |
2021-05-14T16:41:51.733Z | null | 6,157 | Using custom domain with graphql endpoint | Using custom domain with graphql endpoint | [
{
"code": "www.MYDOMAIN.com\n(which links to MYAPP.mongodbstitch.com)\nhttps://realm.mongodb.com/api/client/v2.0/app/MYAPP/auth/providers/.../login\nhttps://realm.mongodb.com/api/client/v2.0/app/MYAPP/graphql\nhttps://www.MYDOMAIN.com/auth/providers/.../login\nhttps://www.MYDOMAIN.com/graphql\nhttps://API.MYDOMAIN.com/auth/providers/.../login\nhttps://API.MYDOMAIN.com/graphql\nhttps://www.MYDOMAIN.NET/auth/providers/.../login\nhttps://www.MYDOMAIN.NET/graphql\n",
"text": "Hi allAs per the instructions here I know it is possible to setup a Mongo Realm app, setup static hosting on the mongodbstitch domain and then link a custom domain name so that users see my app hosted at:My question:In order to fully “brand” my app is it also possible to link the same or another custom domain to my exposed GraphQL endpoint, so that instead of seeing the default mongodb.com domains for auth and endpoint:the GraphQL auth and endpoint would be something like one of the following:1 - The same custom domain but expose a custom path (would also have to work with SPA app and be ignored by the SPA routing):2 - Use a different subdomain to provide both the auth and endpoint:3 - Use a different but related domain for the auth and endpoint:Thank you",
"username": "mba_cat"
},
{
"code": "",
"text": "Not currently but it has been requested on our feedback portal here - Ability to expose API via our own custom DNS entry – MongoDB Feedback EngineIt may be addressed in a future initiative that looks to overhaul our exposed data api’s - stay tuned",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks Ian is there a timeframe for the overhaul and if I develop against the existing data api will there be a way to migrate to the new api when they are ready?",
"username": "mba_cat"
},
{
"code": "",
"text": "Hey @mba_cat I can’t give you good timeline on this because we’re still in the planning phase, but I can post on this thread with any updates. If you’d like to give more specific feedback around GraphQL/APIs on Realm and anything you’d like to see in the service, you can shoot me an email at [email protected]",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Having a custom domain for realm http triggers is important for maintaining a good branded experience. users see the task bar when calls are made to third party services, best to be on a sub domain i.e. api.xxxx.com. Lambda and all other serverless function services allow for this, sadly we cant move to realm functions unless this happenes.",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "Hi @Sumedha_Mehta1 has the planning phase completed ? Any timelines on when this will be available ?",
"username": "V_P"
},
{
"code": "",
"text": "Hi all, bumping this thread as it seems there will be quite a few reverse proxies out there to handle this need. It would be great to understand if this is planned for or if we need to plan for a workaround with a reverse proxy ourselves. Cheers",
"username": "Andy_O_Connor"
},
{
"code": "",
"text": "Hi Andy, would be great if you can document how you did this with a reverse proxy?",
"username": "Rishi_uttam"
}
] | [
"graphql"
] |
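While native custom-domain support for the exposed GraphQL endpoint is pending, the workaround mentioned in the thread is a reverse proxy in front of the Realm API. The sketch below shows only the path-rewrite logic such a proxy (nginx, a Cloudflare Worker, Node http-proxy, etc.) would perform; the app ID "MYAPP" and the chosen routes are placeholder assumptions, not a real deployment.

```javascript
// Hedged sketch of the URL rewrite a reverse proxy would apply so that
// https://api.MYDOMAIN.com/graphql fronts the Realm GraphQL endpoint.
// "MYAPP" is a placeholder app ID, matching the thread's examples.
const REALM_BASE = "https://realm.mongodb.com/api/client/v2.0/app/MYAPP";

function rewriteToRealm(path) {
  if (path === "/graphql") return `${REALM_BASE}/graphql`;
  if (path.startsWith("/auth/")) return `${REALM_BASE}${path}`;
  return null; // anything else is served by the SPA, not proxied
}

console.log(rewriteToRealm("/graphql"));
// → https://realm.mongodb.com/api/client/v2.0/app/MYAPP/graphql
```

The `null` branch is what lets option 1 from the original post coexist with SPA routing: only the two API prefixes are proxied, everything else falls through to static hosting.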
2023-03-04T08:23:45.531Z | null | 10,595 | This.options = options ? {}; | This.options = options ? {}; | [
{
"code": "",
"text": "\nimage1878×1053 160 KB\n\nI run this backend on windows and kali linux worked perfectly but when I tried on ubuntu then it showed an error. could anybody help me?",
"username": "Abu_Said_Shabib"
},
{
"code": "",
"text": "my code is\n\nimage959×1071 95.7 KB\n",
"username": "Abu_Said_Shabib"
},
{
"code": "",
"text": "Hello @Abu_Said_Shabib, Welcome to the MongoDB community forum,I think you are using mongodb npm’s latest driver version 5, and there are Build and Dependency Changes, Just make sure the below thing,Minimum supported Node version\nThe new minimum supported Node.js version is now 14.20.1",
"username": "turivishal"
},
{
"code": "",
"text": "Hi, Were you able to find a solution to this problem? I haven’t been lucky so far.\nI have been facing server connection problems due to things (not sure if it’s because of a firewall or vpn or zscaler) but I Started facing this specific problem yesterday after I removed MongoDB from package.json file, deleted package-lock and node-modules and then reinstalled everything, then installed mongo db again.\nI am facing the same error and I haven’t been able to find other questions related to it on the the internet maybe Im looking for the wrong thing in the wrong places.\nplease help.attaching SS of my package.json file.I am using node version 19.8.0\nScreenshot 2023-03-23 at 6.33.00 AM1018×1550 133 KB\n",
"username": "Najib_Shah"
},
{
"code": "",
"text": "I can’t solve it but it’s probably problem on Ubuntu network | Firewall | Version or security related issues. Because in other versions of linux it working well. Like I’m tested on kali linux, fedora and zorin os.",
"username": "Abu_Said_Shabib"
},
{
"code": "",
"text": "I’m facing this problem on MacOS Ventura 13.1, my colleague with his MacOS is not facing this issue. I don’t understand what the problem could be.Firewall seems like a probable cause since my colleague has been here longer he must have different access than my laptop (he doesn’t remember, I tried asking him what special permissions his laptop might have).",
"username": "Najib_Shah"
},
{
"code": "",
"text": "my problem is fixed now,\nit was due to my company’s internet monitoring software blocking the ports I needed to visit. took me a long time to figure it out because the software is newly implemented and the team handling it wasn’t aware that it blocks ports too, not just website.",
"username": "Najib_Shah"
},
{
"code": "",
"text": "Reciently I find a solution. The comand “npm i mongoose” today install mongoose version 7.0.1., so I deleted directory “node_modules” and after I rewrite the file “package.json” to replace dependencies as “mongoose”: “^5.1.2”, and ejecute command “npm i dependences” and node.js will reinstall mongoose in a past version. this works fine for my.",
"username": "Ramirez_Gomar_Sergio_Jose"
},
{
"code": " this.options = options ?? {};",
"text": "Hello @Najib_Shah / @Abu_Said_Shabib / @Ramirez_Gomar_Sergio_JoseWelcome to the MongoDB Community forums As @turivishal mentioned the minimum supported Node.js version is now 14.20.1 for MongoDB Node.js Driver v5. So, please upgrade the node version to 14.20.1 or higher to resolve the issue.The new minimum supported Node.js version is now 14.20.1However, you can refer to my response here where I’ve explained the cause of the this.options = options ?? {}; error.I hope it helps!Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thank you so much.\nmy problem is resolve due to your solution.",
"username": "manu_vats"
},
{
"code": "",
"text": "3 posts were split to a new topic: Getting Error This.options = options ? {};",
"username": "Kushagra_Kesav"
},
{
"code": "^5.1.2 ^7.5.2",
"text": "This worked for me, I downgraded the mongoose version to ^5.1.2 from ^7.5.2",
"username": "Talha_Maqsood"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | [
"node-js"
] |
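The failing line in the thread's title, `this.options = options ?? {};`, uses the nullish coalescing operator, syntax that older Node.js runtimes cannot parse - which is why the driver's minimum of Node 14.20.1 matters here. On a supported Node, the expression simply defaults `null`/`undefined` options to an empty object while preserving other falsy values; a two-line illustration (the function name is made up for the example):

```javascript
// Illustration of what the driver's failing line does on Node 14+:
// ?? falls back to {} only for null/undefined, unlike || which would also
// replace falsy-but-valid values such as 0 or "".
function makeOptions(options) {
  return options ?? {};
}

console.log(makeOptions(undefined)); // → {}
console.log(makeOptions({ retryWrites: true })); // → { retryWrites: true }
```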
2021-11-23T21:57:33.943Z | null | 7,184 | > error: TypeError: Cannot access member 'db' of undefined | > error: TypeError: Cannot access member ‘db’ of undefined | [
{
"code": "",
"text": "exports = function(arg){\nvar collection = context.services.get(“Cluster0”).db(“Database”).collection(“alldata”);return collection.find({});\n};error:\nTypeError: Cannot access member ‘db’ of undefinedI keep getting this error. How can I fix this?",
"username": "tkdgy_dl"
},
{
"code": "",
"text": "I’ve also tried “mongo-atlas” too.",
"username": "tkdgy_dl"
},
{
"code": "",
"text": "Cannot access member ‘db’ of undefinedCheck this link",
"username": "Ramachandra_Tummala"
},
{
"code": "mongodb-atlas",
"text": "Hi @tkdgy_dl ,You should try mongodb-atlas instead.If that doesn’t work can you go into function UI and copy paste the URL in the browser hereThanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@tkdgy_dl\nI find the answer your problem.\nIn Atlas UI > Triggers, you have to press “Link” button after choice your Link Data Source(s).The reason for the error in TypeError is not connected “Link Data Source(s)”, so occur ‘db’ of undefined.",
"username": "_BE_Austin"
},
{
"code": "",
"text": "yoo!, Thank You bro.",
"username": "M4A1_N_A"
}
] | [] |
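The root cause in this thread is that `context.services.get()` returns `undefined` when the name doesn't match a linked data source, and chaining `.db()` onto `undefined` throws. A small stand-in (the `context` object below is a mock, not the real Atlas Functions runtime) illustrates the failure mode:

```javascript
// Mock of the Atlas Functions `context.services` lookup. Only the name
// of an actually linked data source resolves; any other name yields
// undefined, which is what makes the chained `.db(...)` call blow up.
const context = {
  services: {
    get(name) {
      const linked = { "mongodb-atlas": { db: (d) => ({ database: d }) } };
      return linked[name]; // undefined for unlinked names like "Cluster0"
    },
  },
};

const missing = context.services.get("Cluster0"); // not linked -> undefined
const service = context.services.get("mongodb-atlas"); // linked -> usable
console.log(missing === undefined); // true
console.log(service.db("Database").database); // "Database"
```

This matches the two fixes proposed above: use the linked service name (often `mongodb-atlas`) and make sure the data source is actually linked via the “Link” button.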
2020-04-03T12:52:52.593Z | null | 7,775 | Real Time: MongoDB data to AWS QuickSight | Real Time: MongoDB data to AWS QuickSight | [
{
"code": "",
"text": "My current Database size is of around ~4 GB and it is increasing day by day. I have a requirement to integrate it with BI service to give clear insights of the data to the stakeholders.Is it possible to integrate MongoDB directly with AWS QuickSight? There are options of importing CSVs, JSON etc, But as data growth is high it doesn’t look feasible and I am looking for real time solutions.Note: I am running MongoDB on AWS ec2 instances, not using Atlas currently.What I would like to know is:Is there a best way to connect to MongoDB cluster with AWS QuickSight with Realtime outputs? if not, What are the other best possible solutions?",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "Hey Viraj, did you find the answer to your question?",
"username": "Naser_Zandi"
},
{
"code": "",
"text": "Hi @Naser_Zandi,Welcome to the community.Yes. I tried couple of options and I deployed it successfully.There are several third party connectors available in the market, You can use that. The way, I did was, I wrote a script to get necessary data from Mongo and storing it on RDS. You can try storing files on S3 too. And finally, you can plug that in as the data source in the Quicksight. You can also use AWS DMS to load the data for Quicksight dashboard.I hope this is helpful.Cheers!\nViraj",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "Hi,\nI’m interested in your experience thus far. I have a similar situation (using an SaaS product in AWS GovCloud set on a mongo database). I’m interested to know how effective the solution in report generation and how the issue with non indexed, not relational data is overcome.",
"username": "Robert_Staurowsky"
},
{
"code": "",
"text": "Hello, I’m also interested in your experience on this topic, would you like please reach send me a private message on skype : b.hamichi\nThanks in advance\nBR",
"username": "Boualem_HAMICHI"
},
{
"code": "",
"text": "In this post, you will learn how to use Amazon Athena Federated Query to connect a MongoDB database to Amazon QuickSight in order to build dashboards and visualizations. Amazon Athena is a serverless interactive query service, based on Presto, that...",
"username": "Nithin_Alex"
}
] | [] |
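The approach described above (a script that extracts documents and lands them somewhere QuickSight can read, such as S3 or RDS) hinges on flattening documents into a tabular format first. A rough sketch of that flattening step — the field names are assumptions, and in a real pipeline the documents would come from a MongoDB driver cursor and the result would be uploaded with an S3 client:

```javascript
// Flatten MongoDB-style documents into CSV rows for a BI import.
function docsToCsv(docs, fields) {
  // Quote a value only when it contains a delimiter, quote, or newline,
  // doubling embedded quotes per the usual CSV convention.
  const escape = (v) => {
    const s = v === null || v === undefined ? "" : String(v);
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  const header = fields.join(",");
  const rows = docs.map((doc) => fields.map((f) => escape(doc[f])).join(","));
  return [header, ...rows].join("\n");
}

const sample = [
  { _id: "a1", amount: 42.5, note: 'said "hi"' },
  { _id: "a2", amount: 7, note: null },
];
console.log(docsToCsv(sample, ["_id", "amount", "note"]));
```

For truly real-time dashboards the thread's later suggestion — Athena Federated Query over MongoDB — avoids the batch-export step entirely.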
2021-08-30T07:00:30.513Z | null | 9,246 | How to use aggregation for large collection? | How to use aggregation for large collection? | [
{
"code": "model.aggregate([\n {\n $match: {\n parent: 0\n }\n },\n {\n $graphLookup: {\n from: appId + \"_\" + viewName + \"s\",\n startWith: \"$id\",\n connectFromField: \"id\",\n connectToField: \"parent\",\n depthField: \"level\",\n as: \"data\"\n }\n },\n {\n $unset: [\n \"data._id\",\n \"data.createdAt\",\n \"data.updatedAt\",\n \"data.updateBy\"\n ]\n },\n {\n $unwind: {\n path: \"$data\",\n preserveNullAndEmptyArrays: true\n }\n },\n {\n $sort: {\n \"data.level\": -1\n }\n },\n {\n $group: {\n _id: \"$id\",\n parent: {\n $first: \"$parent\"\n },\n value: {\n $first: \"$value\"\n },\n type: {\n $first: \"$type\"\n },\n data: {\n $push: \"$data\"\n }\n }\n },\n {\n $addFields: {\n data: {\n $reduce: {\n input: \"$data\",\n initialValue: {\n level: -1,\n presentData: [],\n prevData: []\n },\n in: {\n $let: {\n vars: {\n prev: {\n $cond: [\n {\n $eq: [\n \"$$value.level\",\n \"$$this.level\"\n ]\n },\n \"$$value.prevData\",\n \"$$value.presentData\"\n ]\n },\n current: {\n $cond: [\n {\n $eq: [\n \"$$value.level\",\n \"$$this.level\"\n ]\n },\n \"$$value.presentData\",\n []\n ]\n }\n },\n in: {\n level: \"$$this.level\",\n prevData: \"$$prev\",\n presentData: {\n $concatArrays: [\n \"$$current\",\n [\n {\n $mergeObjects: [\n \"$$this\",\n {\n data: {\n $filter: {\n input: \"$$prev\",\n as: \"e\",\n cond: {\n $eq: [\n \"$$e.parent\",\n \"$$this.id\"\n ]\n }\n }\n }\n }\n ]\n }\n ]\n ]\n }\n }\n }\n }\n }\n }\n }\n },\n {\n $addFields: {\n data: \"$data.presentData\"\n }\n }\n ]).allowDiskUse(true)",
"text": "I am using mongo Atlas M10. I want to transform all document data to formatted tree data by using the aggregate framework. It is only working for a certain limit of documents.\nI am getting below error in a large number of documents.\n“MongoError: BSONObj size: 20726581 (0x13C4335) is invalid. Size must be between 0 and 16793600(16MB)”I already set allowDiskUse to true. It is still getting that error.May I have a solution for that error?below are my aggregate stages:",
"username": "edenOo"
},
{
"code": "",
"text": "Hi Eden,Looks like some stages of your pipeline are hitting the 16MB BSON limit . My understanding is that you need to make sure that the output of every stage in your pipeline is less than 16MB (in your example, one of your stages is blocked from outputting ~21MB).When I hit this problem for the first time I also felt like Mongo’s documentation could’ve done a better job at proposing possible solutions / examples of solutions (instead of just stating the limit).Xavier Robitaille\nFeather Finance",
"username": "Xavier_Robitaille"
},
{
"code": "{ $project: { \"<field1>\": 0, \"<field2>\": 0, ... } } // Return all but the specified fields\n",
"text": "For reference, one possible solution to consider is to add a $project stage early to exclude fields that are non-essential to your query, and which use up part of the 21MB.Exclude Fields with $project:",
"username": "Xavier_Robitaille"
},
{
"code": "explainfields",
"text": "add a $project stage early to exclude fields that are non-essentialThis is (usually) bad advice. You never need to do this, because the pipeline already analyzes which fields are needed and only requests those fields from the collection.You can see that by using explain - see fields section.Asya",
"username": "Asya_Kamsky"
},
{
"code": "$graphLookup",
"text": "@edenOo if you’re doing $graphLookup from a view, could you reduce the size of the view? I see you are unsetting several fields that come from the view, but excluding them upfront may limit the size of the entire tree enough to fit into 100MBs.Note that $graphLookup is fundamentally limited to 100MBs and cannot spill to disk. So if the expected tree structure is bigger than 100MBs then you’ll probably need to find a different solution to your problem. Maybe give us more details about what the data is and what exactly you are trying to do with it?Asya",
"username": "Asya_Kamsky"
},
{
"code": "project: {\"activities\": 0}//-----------------------------------------------------------------------------------------------------\n// get user and all its activityBuckets(without actual activities otherwise would bust 16MB)\n//-----------------------------------------------------------------------------------------------------\ndb.users.aggregate( [\n { $match: { 'email': '[email protected]' } }, \n { $lookup: {\n from: \"activitybuckets\",\n let: { users_id: \"$_id\"},\n pipeline: [ \n { $project: {\"activities\": 0} },\n {\n $match: {\n $expr: { \n $and: [\n { $eq: [ '$$users_id', \"$user\" ] },\n }\n }\n }\n ],\n as: \"activities\"\n } },\n] );\n",
"text": "You never need to do this, because the pipeline already analyzes which fields are needed and only requests those fields from the collection.@Asya_Kamsky thanks for stepping in. The reason why I stumbled on Eden’s post is that I had this problem myself, and I was looking for the best way to solve it.Let me describe my use case, our web app handles stock market transactions (aka account “activities”), and we use a Bucket Pattern, because many of our users have several 20k-50k transactions/activities in their account (i.e. several times the 16MB limit). Our use case is pretty much exactly the example described in these two articles by Justin LaBreck.I was getting BSON size limit error messages from the following query when querying users with many activityBuckets. I added the project: {\"activities\": 0} stage and it solved my problem. The query returns all of the user’s activityBuckets, but without the actual activity data (ie. only the activityBucket high level data).Would you have recommended a different solution?",
"username": "Xavier_Robitaille"
},
{
"code": "activities$project$graphLookup$project$unset$graphLookup",
"text": "The problem you describe is quite different - without the project in the inner pipeline you’re saying you want all of the document to be in the activities array and that would make it bigger than legal BSON size for single document. $project is needed when you have to tell the engine what fields you want/need. In the original answer you imply that it’s necessary to exclude fields not essential to your query which the engine will attempt to determine by itself based on which fields you are using in the pipeline and which you are returning to the client. So it’s important to specify correctly (at the end of the pipeline usually is the best place) which fields you want back. Sometimes in complex sub-pipelines where you need to specify that is less obvious.In the case of $graphLookup like the original question, there is a limitation that means there’s no way to use $project or $unset other than by creating a view to make the collection you’re doing $graphLookup in smaller.Hope this is more helpful, rather than more confusing Asya",
"username": "Asya_Kamsky"
},
{
"code": "$project$matchlocalFieldforeignField",
"text": "P.S. I would put $project after $match inside the sub-pipeline, by the way. I also would use the localField/foreignField syntax, as of 5.0.0 you can still add more stages (due to https://jira.mongodb.org/browse/SERVER-34927 being implemented).",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "@Asya_Kamsky thank you so much!It is much clearer now.",
"username": "Xavier_Robitaille"
},
{
"code": "$graphLookup$project$unset$graphLookup$graphLookup",
"text": "because the pipeline already analyzes which fields are needed and only requests those fields from the collection.In the case of $graphLookup like the original question, there is a limitation that means there’s no way to use $project or $unset other than by creating a view to make the collection you’re doing $graphLookup in smaller.I came to the same problem. Thanks for the explanation! My problem is solved. But I think it would be much nicer if we can apply some pipeline before $graphLookup, instead of creating a view.",
"username": "Yun_Hao"
},
{
"code": "",
"text": "Hello We have more 300M documents in one of our collection we have written an aggregation pipeline to separate the records which are in one year range. Pipeline is pretty simple juat have two stage",
"username": "Venkata_Sai_Gopi"
},
{
"code": "",
"text": "Without seeing the real pipeline that you are doing it is impossible for us to pin-point any issues you might get.",
"username": "steevej"
}
] | [
"aggregation"
] |
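As noted in the replies above, `$graphLookup` cannot spill to disk and cannot take an inline sub-pipeline, so the usual workaround is a trimmed view. The snippet below builds the two pieces as plain objects; in mongosh you would pass them to `db.createView(...)` and `db.<collection>.aggregate(...)`. The collection and field names are illustrative, not taken from the original question:

```javascript
// Pipeline for a slim view that keeps only the fields the traversal needs,
// shrinking every document $graphLookup must hold in its 100MB budget.
const slimViewPipeline = [
  { $project: { id: 1, parent: 1, value: 1, type: 1 } },
];

// $graphLookup stage pointed at the slim view instead of the full
// collection; the shape mirrors the stage in the original question.
const graphLookupStage = {
  $graphLookup: {
    from: "nodes_slim", // the view created with slimViewPipeline
    startWith: "$id",
    connectFromField: "id",
    connectToField: "parent",
    depthField: "level",
    as: "data",
  },
};

// In mongosh (not runnable here without a server):
//   db.createView("nodes_slim", "nodes", slimViewPipeline);
//   db.nodes.aggregate([{ $match: { parent: 0 } }, graphLookupStage]);
console.log(graphLookupStage.$graphLookup.from); // "nodes_slim"
```

If the expected tree is still larger than 100MB after trimming, the traversal has to move out of a single `$graphLookup`, as Asya points out.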
2022-05-22T14:57:14.086Z | null | 13,933 | MissingSchemaError: Schema hasn't been registered for model UserAddress.address | MissingSchemaError: Schema hasn’t been registered for model UserAddress.address | [
{
"code": "const mongoose = require(\"mongoose\");\n\nconst orderSchema = new mongoose.Schema(\n {\n user: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"User\",\n required: true,\n },\n addressId: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"UserAddress.address\",\n required: true,\n },\n totalAmount: {\n type: Number,\n required: true,\n },\n items: [\n {\n productId: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Product\",\n },\n payablePrice: {\n type: Number,\n required: true,\n },\n purchasedQty: {\n type: Number,\n required: true,\n },\n },\n ],\n paymentStatus: {\n type: String,\n enum: [\"Pending\", \"Completed\", \"Cancelled\", \"Refund\"],\n required: true,\n },\n paymentType: {\n type: String,\n enum: [\"CoD\", \"Card\", \"Wire\"],\n required: true,\n },\n orderStatus: [\n {\n type: {\n type: String,\n enum: [\"Ordered\", \"Packed\", \"Shipped\", \"Delivered\"],\n default: \"Ordered\",\n },\n date: {\n type: Date,\n },\n isCompleted: {\n type: Boolean,\n default: false,\n },\n },\n ],\n },\n { timestamps: true }\n);\n\nmodule.exports = mongoose.model(\"Order\", orderSchema);\nconst mongoose = require(\"mongoose\");\n\nconst addressSchema = new mongoose.Schema({\n name: {\n type: String,\n required: true,\n trim: true,\n min: 10,\n max: 60,\n },\n mobileNumber: {\n type: String,\n required: true,\n trim: true,\n },\n pinCode: {\n type: String,\n required: true,\n trim: true,\n },\n locality: {\n type: String,\n required: true,\n trim: true,\n min: 10,\n max: 100,\n },\n address: {\n type: String,\n required: true,\n trim: true,\n min: 10,\n max: 100,\n },\n cityDistrictTown: {\n type: String,\n required: true,\n trim: true,\n },\n state: {\n type: String,\n required: true,\n required: true,\n },\n landmark: {\n type: String,\n min: 10,\n max: 100,\n },\n alternatePhone: {\n type: String,\n },\n addressType: {\n type: String,\n required: true,\n enum: [\"home\", \"work\"],\n required: true,\n },\n});\n\nconst userAddressSchema = new 
mongoose.Schema(\n {\n user: {\n type: mongoose.Schema.Types.ObjectId,\n required: true,\n ref: \"User\",\n },\n address: [addressSchema],\n },\n { timestamps: true }\n);\n\nmongoose.model(\"Address\", addressSchema);\nmodule.exports = mongoose.model(\"UserAddress\", userAddressSchema);\nconst Order = require(\"../models/order\");\nconst Cart = require(\"../models/cart\");\nconst Address = require(\"../models/address\");\nconst Product = require(\"../models/product\");\n\nexports.getOrders = (req, res) => {\n Order.find({ user: req.user._id })\n .select(\"_id paymentStatus paymentType orderStatus items addressId\")\n .populate(\"items.productId\", \"_id name productImages\")\n .populate(\"addressId\")\n .exec((error, orders) => {\n if (error) {console.log(error) \n return res.status(400).json({ error });}\n if (orders) {\n res.status(200).json({ orders });\n }\n });\n \n};\n",
"text": "Been pulling my hair out for hours now, just can’t figure out why the field refuses to populate. What I want to do is return the AddressId field populated with values instead of just an ID, but nothing I’ve tried works, none of the solutions I found do anything.If you need any other code from the project, I will update the question. Any help is highly appreciated.Order Model:Address Model:Code that runs the query:",
"username": "Marin_Vilic"
},
{
"code": "ref:\"UserAddress.Address\"ref:\"UserAddress.address\"mongoose.model(\"Address\", addressSchema);address: [addressSchema]ref:\"Address\"mongoose.model(\"Address\", addressSchema);\"UserAddress\"",
"text": "First, I know nothing about mongoose so what I suggest might be completely wrong.You register addressSchema as the Address model. Everywhere ref: is used, it looks like a model name, rather than the field name of another model.So I would try first to use ref:\"UserAddress.Address\" rather than ref:\"UserAddress.address\", that is the name you use inmongoose.model(\"Address\", addressSchema);rather than the name you use inaddress: [addressSchema]If that fails, I would try ref:\"Address\" because you domongoose.model(\"Address\", addressSchema);You might need to export it like you do for \"UserAddress\".",
"username": "steevej"
},
{
"code": "",
"text": "I also got the same error as you. How did you fix it?",
"username": "Minh_Hi_u_Nguy_n"
},
{
"code": "module.exports = mongoose.model( \"user\" , userSchema );\ncustomer: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"User\"\n}\n",
"text": "I got the exact same error,\nmy models were being called correctly everything was correct exceptuserModel.jsorderModel.jsspelling error in the reference. 1 hour wasted on a capital letter left out in the past ",
"username": "Adam_Hannath"
},
{
"code": "mongoose.createConnectionyourConnection.model(modelName)refconst { productsDbConnection, usersDbConnection } = require(\"../db\") ; // connections imported from db file //\n\nconst UserSchema = new mongoose.Schema({\n // ...\n cart: [\n {\n type: mongoose.Schema.Types.ObjectId,\n ref: productsDbConnection.model('women_collection') // Make sure you have registered the women_collection model in you Db//\n },\n ],\n});\n",
"text": "Hi, my name is Mohsin Hassan Khan, If you’re encountering the “MissingSchemaError” while trying to reference a collection from another MongoDB database, follow these steps:Here’s an example:Thanks, Please let me know if this helps anyone.",
"username": "khan_ali"
}
] | [
"queries",
"node-js",
"mongoose-odm"
] |
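The recurring fix in this thread is that `ref` must name a registered model, never a field path like "UserAddress.address". A toy registry (not Mongoose's real internals) shows why the lookup fails:

```javascript
// Toy version of the model registry that populate() consults: refs are
// looked up by registered model name, so a field path never matches and
// produces a MissingSchemaError-style failure.
const models = new Map();
function registerModel(name, schema) {
  models.set(name, schema);
  return schema;
}
function resolveRef(ref) {
  if (!models.has(ref)) {
    throw new Error(`Schema hasn't been registered for model "${ref}".`);
  }
  return models.get(ref);
}

registerModel("UserAddress", { address: ["addressSchema"] });

console.log(Boolean(resolveRef("UserAddress"))); // true: model name matches
try {
  resolveRef("UserAddress.address"); // field path, not a model name
} catch (err) {
  console.log(err.message);
}
```

The same mechanism explains Adam's capitalization bug above: "user" and "User" are two different registry keys.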
End of preview.