Getting Started With Neo4j GraphQL & Netlify
Building a GRANDstack Real Estate Search App: Part 1
Will Lyon starts off his journey of building a Zillow clone in GRANDstack.
Links And Resources
Welcome, folks, to the Neo4j Twitch channel. My name's Will. On my streams I typically go into depth on using GraphQL and GRANDstack with Neo4j, which is exactly what we're going to do today.

After the last session I tweeted out a poll to ask what folks would be interested in seeing; I thought it would be fun to build a GRANDstack app from scratch based on the results. The options were movie recommendations; a Zillow clone (if you're not familiar with Zillow, it's a real estate search application, for finding houses or apartments for sale or rent); a social network; or something else. The results were pretty close, and "social network" was the winner. But thinking about it a bit more, I think we can build something that uses all three of these elements: a real estate search app with some social features and some recommendations. Instead of movie recommendations, we'll show things like house recommendations.

Cool. If you're not familiar with Zillow or real estate search apps in general, let's take a look and start there. Say I'm moving and searching for a house, maybe in San Mateo, California. Zillow lets me search by city, and I get a map view with search results pinned on a map. I can click on a result to view details about the listing: scroll through pictures, read more information about it, see how many bedrooms it has, that kind of thing.

I also see an estimate of how much the house is worth: not just the asking price, but an estimate based on market conditions and the tax assessment. And not only for this house; for San Mateo in general, and for the specific neighborhood, what's the average home price, and how does it change over time? I also see whether this house has previously been listed for sale, if that information is available, along with the purchase price, the property tax assessment, and so on. So there's a lot of detailed information about this property and this listing.

In the map view, I can also remove the boundary, so I'm not just searching by city: I can scroll around on the map, and as I scroll, new listings come up. So our app will need to handle search results moving around on a map, which means we'll need some geo components in our application. We'll also need to filter by things like "at least four bedrooms, at least two bathrooms", whatever my requirements are; so, some sort of filtering.

What's also interesting on the map view is that I can see not only the houses listed for sale as annotations on the map. If I zoom in, I quickly see that I have information about basically every parcel in this view of the map. If a property isn't for sale, it's not showing me an asking price; it's showing me the Zestimate, the estimate of how much the house is worth, along with whatever information is available about property taxes and so on. I can also see the outlines of the lot: not just a single point of information for the house, but the geometry of the polygon that makes up the lot. These are all things to keep in mind as we think about how we're going to build our real estate search application.

Another interesting thing we can do on the map: let's say I want to live within walking distance of the Neo4j office, because I want to be able to walk to work every day, so somewhere around downtown San Mateo. I can draw the area I want to search. I want to be on this side of the highway, and I don't want to go too high up, because there are foothills there and I'd rather not walk uphill on the way home every day; over here it's kind of flat. So the area I'm interested in looks like this: I can go a little way up the hills, but I want to stay within the boundary of the highway. I draw the area, hit apply, and now my search results are limited to just that polygon. Something to consider. We can also view, from within a listing, the lot lines specific to that listing, and it shows the lot lines for the neighbors as well.

Those are the features for a typical user who's searching for houses. You can also imagine there's functionality for agents, who need to be able to go in and create listings.
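That draw-an-area search has to decide, for each listing, whether its coordinates fall inside the user-drawn polygon. As a rough sketch of the kind of geo check involved (this is the classic ray-casting test, my own illustration rather than code from the stream; the names and coordinates are made up):

```javascript
// Ray-casting point-in-polygon test: count how many polygon edges a
// horizontal ray from the point crosses; an odd count means "inside".
// `polygon` is an array of [lng, lat] vertices; `point` is [lng, lat].
function pointInPolygon(point, polygon) {
  const [x, y] = point;
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [xi, yi] = polygon[i];
    const [xj, yj] = polygon[j];
    const crosses =
      yi > y !== yj > y &&
      x < ((xj - xi) * (y - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}

// A hypothetical drawn search area (a simple square) and two listings:
const area = [[0, 0], [10, 0], [10, 10], [0, 10]];
console.log(pointInPolygon([5, 5], area));  // inside the drawn area
console.log(pointInPolygon([15, 5], area)); // outside it
```

In practice you would push this predicate down into the database query (Neo4j has spatial point types and distance functions) rather than filtering in application code, but the idea is the same.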
There's also another interesting feature. Let's go back to our search, back to the main page, and do a search for San Mateo. Somewhere there's a way to get information about the housing market as well... let's see... research... housing data. I don't want to download the data, but there are dashboards that show information about the real estate conditions of, say, a specific city. Maybe in the visuals here... yes, these sorts of things. We can view the average change in home price for a city, or compare cities to other cities based on housing affordability. So not just the transactional level of searching for houses individually, but aggregate, more analytics-style data.

So that's the basic functionality we want to implement in our application, and we're going to do it using GRANDstack: GraphQL, React, Apollo, and the Neo4j database.

Let's take a look at the architecture of our application. It's going to look something like this: we have a Neo4j database where we'll store our information about property listings and users. And since, as you saw, we had information about basically all of the parcels within our search area, we also need to think about some sort of base map layer. So we'll have Neo4j as our database; on top of Neo4j we'll have a GraphQL API that exposes this data, in queries, to our React application, which will be the front end. The front end will be a React application using Apollo Client, and we'll use Apollo Server to help build our GraphQL API. For integrating Neo4j and GraphQL we'll make use of the neo4j-graphql-js library, which will help us generate database queries driven mostly from GraphQL.

Cool. So what I want to do today is show how to get started building essentially the skeleton of a GRANDstack application, connect it to a Neo4j instance running somewhere in the cloud, and then deploy it to Netlify. Then, if we have time, I want to dig into what our graph data model is going to look like for this application, and talk a bit about the graph data modeling process: what informs it, how we go about creating the graph data model, and why that's important.

There's a GRANDstack starter project that makes it pretty easy to build out a template GRANDstack application. There's a video walkthrough, and there's the create-grand-stack-app command-line tool, which basically lets us pull down and configure an initial application that has all the pieces we need. Let's jump to the GitHub README for that project. Actually, let me be sure to put some links in the chat here, and drop in the link for the GitHub project. Basically, the create-grand-stack-app CLI pulls down the starter and lets us configure it, pointed at some Neo4j instance, and then we have a skeleton React application with a few components and GraphQL queries that query our GraphQL API.

Inside the GRANDstack starter project, first of all, at the root level we have some npm scripts for starting both the API and the React application, and for building it; there are scripts for different deployment targets, for example builds for Netlify and for Vercel. We have global ESLint and Prettier configs for code formatting and linting, that kind of thing. Then in the api project, which is a Node.js GraphQL API application that uses Apollo Server and neo4j-graphql-js to build our GraphQL API, there's a .env file with the configuration for the API: which Neo4j database to connect to, whether to use an encrypted connection, where to serve the GraphQL API, what port to listen on. These are all set as environment variables that get read from this file or get set during our build process. Then in web-react we have a React application. It starts from the create-react-app starter and pulls in a few things: the Material UI component library for styling, React Router so we can do routing, and some example components that use Apollo Client and React hooks to fetch data. Basically it's a simple business reviews application with a dashboard view and some basic search functionality. There's also an Angular implementation of the UI; we won't be using that today, but it's available, along with some options for different deployments. OK, so that's what's available.

Now, the first thing I'm going to do is click this "provision Neo4j" button, because the first thing I need is a Neo4j instance; let's start there with our database. That takes us to Neo4j Sandbox and prompts us to provision a blank Neo4j sandbox. Neo4j Sandbox is a free service that lets us spin up Neo4j instances that Neo4j hosts somewhere in the cloud but that are still private to us. I wasn't prompted to log in, but if you haven't logged in, you'll be prompted to go through an authorization flow, and when the database spins up it'll be private to you. This is a really nice thing to have for development, experimenting, and playing around. We chose the blank sandbox; there are also a bunch of other pre-built data sets we can spin up that have a more guided experience with embedded queries for various use cases, but today we want to start with a blank database.

I can open Neo4j Browser and run some Cypher queries just to verify that this database is indeed empty. If we click the database slide-out, I see I don't have any node or relationship types here. Let's do a `MATCH (n) RETURN count(n)`; this should tell us how many nodes we have in the database. Zero: the database is empty.

Cool. The next thing is to run the npx create command, so I'll go ahead and do that; this is as good a directory as any. `npx create-grand-stack-app`, and then I need to choose a name, something to call it. Let's call it "willow-grandstack": it's not Zillow, it's Willow. This is going to pull down the latest release of the create-grand-stack-app command-line tool, which will then pull down the latest release of the GRANDstack starter project we were just looking at. We don't need to install the create-grand-stack-app tool; if we use npx it always fetches the latest release, which is nice.

So this pulls down the latest version of the GRANDstack starter, installs some dependencies, and then prompts us for some configuration. Let me make this a little bigger so it's easier to see. OK, now we're installing our dependencies. The first dependency it installs is the GRANDstack CLI. We won't be using that today, but we will later in the series as we get to more advanced functionality we need to add; the GRANDstack CLI is useful for things like inferring type definitions from an existing database. Then we're installing dependencies for our api project and for the React project as well.

I should mention that you can use create-grand-stack-app with npm via npx, or with yarn. Yarn has a different convention, `yarn create grand-stack-app`, but it's designed to work with either.

OK, once we're done installing our dependencies, the next step is to configure our GRANDstack starter project to connect to the Neo4j instance we created in Neo4j Sandbox. It tells me it's going to ask for some configuration options and write the results to this willow-grandstack/api/.env file, so if we need to change these later, they'll be available there. Cool; the first thing I'll do is jump back to Sandbox.
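As an aside on those prompts: what the CLI writes is just plain KEY=VALUE lines in api/.env. To illustrate the format, here's a minimal sketch of what a dotenv-style loader does (this is my own toy parser, not the library the starter actually uses, and the values are placeholders):

```javascript
// Minimal dotenv-style parser: turn KEY=VALUE lines into an object.
// A real dotenv implementation also handles quoting and other edge cases.
function parseEnv(text) {
  const vars = {};
  for (const line of text.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks/comments
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue;
    vars[trimmed.slice(0, eq)] = trimmed.slice(eq + 1);
  }
  return vars;
}

// Placeholder values; the key names are the ones used later in the stream:
const env = parseEnv(`
NEO4J_URI=bolt://<your-sandbox-ip>:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=<your-sandbox-password>
`);
console.log(env.NEO4J_USER); // neo4j
```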
In Connection Details, I copy the Bolt URL; that's the connection string the driver will use to connect to the database. Do we want an encrypted connection? No: we're not set up for an encrypted connection with Neo4j Sandbox. The Neo4j user, if we check, is going to be neo4j, and I'll go ahead and copy the password.

Cool. It says we can cd into willow-grandstack and do `npm run start`, and this command starts both our GraphQL API and the web-react project. It says our GraphQL server is ready, and it looks like the React dev server is still coming up, so let's take a look at our GraphQL API at localhost:4001/graphql.

OK, so we have GraphQL Playground, which is kind of a query workbench and documentation viewer. Since GraphQL APIs have this great feature of introspection, we can view all of the types and the data available in our GraphQL API. For our queries, which are the entry points to our API, we have things like users: search for users. We can see users have recommended businesses, as well as some other properties, and that users review businesses. This is the schema that comes with the GRANDstack starter project; it's a business reviews data model, with information about users, the businesses they've reviewed, and that kind of thing.

OK, let's run a query. Let's just search for users, all the users, and fetch the name property for any users we find. This should be connecting to our Neo4j Sandbox instance... and we got an error: failure to connect to the server. OK, that's fine; let me figure out the error message. Ah, I put in the wrong connection string: I put in localhost instead of pasting in our connection string for Sandbox, and I don't have a local database running, so that's why we got the error. That's OK, we'll fix it in a minute.

What I can see here now, being logged by the API, are some generated Cypher queries. Here's the first one that ran, which is this match on user, returning the user name. What's going on there? In GraphQL Playground we ran this query for user name, and neo4j-graphql-js in our GraphQL API transpiles it into a Cypher query and sends it to Neo4j; in this case, this is the generated Cypher query. So we don't have to write any resolvers in our GraphQL API. Typically, in a GraphQL implementation, you would define your type definitions and then define some resolver functions that specify how to actually go fetch the data, either from a database or from some other API or system. But because we're using neo4j-graphql-js, we have that Cypher generation logic, which is really nice. We can see some more Cypher queries being logged as our React application spins up, which just gives us errors because I put in the wrong Neo4j connection string. So let's fix that.

I'm going to stop our API server and web server and open this up in a text editor. I like to use VS Code, but we can really do this with any editor. (VS Code is complaining about something... let's try that again... there we go.) OK, so here we see our local version of the GRANDstack starter project, a very similar layout to what we saw in the GitHub README.
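To give a feel for the query-to-Cypher translation we just saw in the logs, here's a toy illustration. This is my own sketch of the idea; the real neo4j-graphql-js code generation is far more involved and handles arguments, filtering, nesting, and projections:

```javascript
// Toy GraphQL-to-Cypher translation: given a type name and the list of
// requested fields, build a Cypher query that projects just those fields.
// neo4j-graphql-js does something conceptually similar, but properly,
// by walking the GraphQL AST of the incoming query.
function toCypher(typeName, fields) {
  const projection = fields.map((f) => `.${f}`).join(", ");
  return `MATCH (n:${typeName}) RETURN n { ${projection} } AS ${typeName.toLowerCase()}`;
}

// The { User { name } } query from Playground would map to roughly:
const cypher = toCypher("User", ["name"]);
console.log(cypher);
// MATCH (n:User) RETURN n { .name } AS user
```

The key point is that one GraphQL request becomes one generated database query, rather than a resolver call per field.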
We'll notice we have the api project and the web-react project. What we want to do right away, though, is fix the .env file inside api. This is what was configured by the CLI; it also did some other things, for example removing web-angular, since we didn't specify that we wanted that version, and by default we just get React. So let's fix this: I go back to Sandbox, copy the Bolt URL, and replace it in here. Let's get a terminal up and do `npm start` to try this again. Here's our GraphQL API, and now we should be connected to our Sandbox instance.

So now when I run the user name query (let me make this a bit bigger, easier to read), I get back just an empty user array. Why? Because there's no data in the database: it runs that Cypher query, which we can see logged here, and returns nothing. One thing we can do, and this is explained here in the output, is seed the database with sample data for the business reviews application. If I go into api, I can do `npm run seedDb`, and what this does is execute some GraphQL mutations against our running GraphQL API to create a bunch of data. You can see in the background the Cypher queries it's running to create our sample data in the database.

Now if we run the query again, we get back some user names, and if we go back to our database in Neo4j Browser, we see we have 36 nodes. Let's make that a bit bigger and do `CALL db.schema.visualization()`, and we can see the data model: users that wrote reviews, reviews of businesses, businesses that are in categories. OK, cool. And now if we look at our React application, the dashboard loads: here's a component showing us the distribution of ratings, we've found a total of four users, we can filter users by name, and we can do ordering in our results.

Cool, so that's a good starting point for our real estate search app. What we want now is to figure out how to make changes to start converting this project into our Zillow clone. But before we dive into talking about data modeling and what our data model is going to look like, let's see how we can get just our starter version deployed somewhere, so we have it hosted out there.

A viewer asks: "Curious to know how I can achieve authorization for mutations with Neo4j." Yeah, totally, that's a common question that comes up, and it's definitely something we're going to need to tackle as we build out this application. One feature of a real estate search app that we didn't really talk about is that I have the ability to sign in, and once I've signed in I can maybe save search results, so that next time I can view my private list of saved results. So we'll definitely need to figure out how to do authorization, how to protect some data so that it's private just to us, that sort of thing. Totally a good point to bring up. We won't get to it today (this is going to be a longer series as we build out this application), but definitely stay tuned; we'll cover it in detail.

In the meantime, if we search the GRANDstack docs for authorization, there's a bit in the docs (you've probably seen this already) about some options for doing authorization with GRANDstack. There's also, I think on the blog at blog.grandstack.io, a more tutorial-style example on authorization. What we do for authorization... I guess I'd say, first of all, GraphQL itself is not very opinionated about authorization. There's no built-in authorization functionality in GraphQL; there are many different ways to do it, and it's left up to the person implementing the GraphQL API to choose the method of authorization they want. But in GRANDstack and neo4j-graphql-js, we have some features we expose to make it a bit easier to implement authorization and authentication in your application. Let me paste this blog post into the chat as well.

What we do for authorization in neo4j-graphql-js (let's find an example in the blog post; this part talks about the different options) is expose authorization-specific schema directives that you can then use to annotate your types. A schema directive is an annotation in your type definitions indicating that some sort of custom logic should happen on the server. So we can, for example, add the `@isAuthenticated` schema directive to the User type, and now, in order to query the User type in our GraphQL API, a request needs to be authenticated. Similarly, we can say someone needs to have a certain role: if you are an admin, maybe you can see a business's address; if you're not, then you can't see it. And then for mutations, when we're talking not just about who can see information but about who can create, update, or delete certain types of information, we have a scope directive; in this case we're restricting the createBusiness mutation to just users with the right scope.

And we work with any JWT provider to make this happen. A JWT is a JSON Web Token, which basically allows us to cryptographically embed what are called claims, things like roles and scopes. When we're setting up our GraphQL API, we just pass in and specify some environment variables specific to the JWT key we have. Then, when the GraphQL server gets a request, it can check the header for a JWT, try to decode it using the specified secret, and know which claims in there have been verified. This example uses Auth0, and there's a link to the code for the demo as well, which I'll drop in the chat. Cool, so hopefully that's helpful to give you some pointers for things to look at. Maybe in the next couple of sessions after this one we'll tackle this problem, but yeah, definitely something to be aware of.

Cool, OK. So we've now correctly configured our GraphQL API to connect to our Sandbox instance, we're able to run queries against it in GraphQL Playground, and we're able to run our React application, which hits our GraphQL API and gives us this dashboard view. That's a good place to be.
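Back on the JWT point for a second: as a minimal illustration of what's inside a token, here's a decode-only sketch. This is my own hand-built example (an unsigned token with made-up claims); a real server must verify the signature against the secret or public key, for instance with a library like jsonwebtoken, before trusting any claims:

```javascript
// A JWT is three base64url-encoded segments: header.payload.signature.
// Decoding the payload reveals the claims; it does NOT verify them.
function decodeJwtPayload(token) {
  const payload = token.split(".")[1];
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}

// Hand-built, unsigned example token with role/scope-style claims
// (the claim names and values here are illustrative, not Auth0's):
const header = Buffer.from(JSON.stringify({ alg: "none", typ: "JWT" })).toString("base64url");
const body = Buffer.from(
  JSON.stringify({ sub: "user-123", roles: ["admin"] })
).toString("base64url");
const token = `${header}.${body}.`;

console.log(decodeJwtPayload(token).roles); // [ 'admin' ]
```

The server-side directives described above work against exactly these decoded (and verified) claims: `@hasRole` checks a roles claim, and the scope directive checks a scopes claim.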
good 35:05 place to be at this point what we said 35:06 we wanted to do next 35:07 was to deploy 35:11 our graphql api and the react 35:14 application 35:15 somewhere so that we can maybe you know 35:18 share this with i don't know 35:20 tell my mom that i built a web 35:22 application or something 35:24 share it with other people on the team 35:26 that sort of thing 35:29 so uh to do that we're going to use 35:32 netlify 35:33 if we go back into 35:37 the grand stack starter readme 35:41 we can see these deploy buttons which 35:44 which are actually pretty cool they 35:45 allow for sort of one-click deployment 35:48 uh of the project to these services just 35:50 from uh from the readme 35:52 which is fun but we already have sort of 35:54 the version 35:55 locally that we're working on if we look 35:58 in the deployment section we have some 35:59 options 36:00 for netlify and reversal 36:04 as well let's let's do netlify 36:07 today so with netlify what we want to do 36:10 is connect a netlify build to 36:14 a github repository and then anytime 36:17 we push changes to that github repo 36:21 netlify will pick that up and we'll then 36:23 build and 36:24 deploy our application and this is 36:27 really really useful because we can do 36:29 things like 36:30 open a pull request that has potential 36:33 changes 36:34 and netify will build like a preview 36:37 build of that that won't replace the 36:39 production deployments but then we can 36:41 sort of test it 36:42 live and then if that looks good then we 36:44 can merge it in so 36:45 that's a really really helpful workflow 36:49 to have in this iterative development so 36:53 first though we need uh to push our 36:55 project up 36:56 to github so let's go 37:01 github.com new so we'll create a new 37:05 repo um 37:08 call it willow grandstack 37:12 real estate 37:15 real estate 37:18 search application using 37:24 grand stack cool make this public 37:29 create the repo i'm going to copy this 37:32 line 37:34 and 
jump back here 37:38 in our terminal i'll create 37:44 get repo 37:47 let's add everything let's do a git 37:50 status to see 37:52 you can see exactly what we're adding 37:54 here 37:55 so we're adding um some of our eslint 37:59 and printer config we're adding 38:00 our api project 38:04 we're adding the web react project which 38:07 has a handful of components 38:09 yep looks good 38:12 uh initial commit 38:16 and we'll connect this to github 38:19 and push it up 38:34 we go back to github now we see that we 38:37 have 38:38 deployed our project so one thing to 38:40 point out is that 38:41 we did not check in our env 38:45 file that had our database credentials 38:47 um that's something we 38:49 want to make sure we don't do pushing 38:51 database credentials 38:53 to public repo is 38:57 not good um so that's if we look in the 39:01 in the get ignore file here let's make 39:04 that terminal a bit smaller 39:09 uh somewhere in here yeah we're saying 39:11 don't include any dot emd files 39:13 so that means what we need to do is when 39:15 we 39:16 connect this to nuttlify we need to tell 39:18 netlify the environment variables to use 39:21 with 39:21 the connection credentials to our neo4j 39:23 database 39:25 okay so we've got our github repo up the 39:27 next thing we want to do is go to 39:31 nutlify and another reason i like nutify 39:35 is it has a great free tier so we can do 39:38 these deployments without having 39:42 to put in a credit card and start paying 39:44 which is really nice for just 39:45 development and testing of course once 39:47 we want 39:48 more features or or if we have more 39:50 traffic then 39:51 we may have to pay for that but for the 39:54 free tier it's pretty 39:56 good so we'll sign in we'll do new site 39:59 from git connect to github 40:05 and search for repo named willow 40:11 call this willow grand stack 40:17 cool so included in the starter um are 40:20 the 40:21 the build configurations the build 40:22 commands uh 
specific to 40:25 netlify if we look 40:28 in the repo we'll see a netlify.toml 40:32 file so this 40:33 this just uh sort of has information 40:35 about how to build this specifically on 40:36 netlify we'll talk about 40:38 how that works in a second but before we 40:41 can hit the deploy button we need to add 40:45 three environment variables so it's 40:47 exactly the same as what we had 40:50 in our .env file we had what a neo4j 40:53 user a neo4j 40:59 password 41:03 and a neo4j uri 41:07 so the neo4j user is neo4j um the 41:10 password let's jump back 41:11 to sandbox 41:15 get our password and the uri 41:18 is this bolt connection string 41:24 cool so we'll mash the deploy button and 41:27 so now 41:28 for the initial deploy this is going out 41:30 to 41:31 github pulling in our project and 41:34 running that build command now there's 41:38 there's two different projects in here 41:40 right and we need to deploy both of them 41:41 one is we need to deploy the 41:45 graphql api 41:48 and we also need to deploy our react 41:52 application so react is 41:55 fairly straightforward that gets built 41:57 and then served out over 41:58 netlify's cdn as static content 42:02 but our graphql api that's a 42:05 node.js web server that's serving 42:09 a graphql endpoint that's connecting to 42:12 our database 42:13 how how is that going to work in netlify 42:15 well with netlify 42:16 we have these things called functions 42:21 these are aws lambdas 42:24 that we can use in netlify in our build 42:28 so netlify you can think of the netlify 42:31 function feature you can think of as 42:32 having 42:32 a nice uh developer experience wrapped 42:35 on top 42:36 of aws lambdas that takes 42:38 care of a lot of the the sort of 42:40 configuration and deployment if we were 42:41 using 42:42 aws directly so the way this works in 42:45 the starter if we jump back 42:47 to our code look in api well first let's 42:51 talk about 42:52 the case when we're
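A netlify.toml for a layout like this might look roughly as follows — the starter ships its own version, so treat the exact paths and commands here as an illustrative sketch, not the starter's actual file:

```toml
[build]
  # Build the React app and the serverless GraphQL function
  command = "npm run build"
  # Static assets served from Netlify's CDN
  publish = "web-react/build"
  # Directory containing the lambda-style functions
  functions = "functions"
```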
running this locally 42:55 since 42:55 i guess we haven't really looked at that 42:57 code yet 42:59 so when i run npm 43:03 run start or just npm start 43:07 when i run that it's starting 43:12 um our graphql api 43:15 by running essentially a node.js 43:19 web server locally from this 43:23 index.js file so let's take a look at 43:27 this 43:30 so first of all we're pulling in some 43:33 graphql type definitions 43:35 these are specified in this 43:38 schema.graphql file and 43:42 we'll talk a bit more about the the 43:45 details of this but essentially what 43:46 we're doing here 43:48 you saw in our api we were searching for 43:51 users 43:52 and then grabbing the name field but 43:55 with graphql 43:56 we define the types in 43:59 our api all the fields that are 44:01 available how they're connected 44:03 that's where the graph part in 44:05 graphql comes in what is the data graph 44:07 that we're talking about 44:10 so this comes pre-configured with 44:13 uh type definitions for the business 44:16 reviews application we're defining 44:18 all of the types these map to node 44:21 labels 44:22 in neo4j so when we say type user 44:26 in the property graph model we're mapping 44:29 this 44:29 to nodes with the label user 44:34 and then these fields get mapped to 44:36 properties 44:37 on the node we'll talk about how we can 44:39 do relationships and properties on 44:41 relationships and 44:42 and so on later the other thing we can 44:45 do here 44:46 we were talking about graphql schema 44:48 directives earlier when 44:50 someone asked that question about how do 44:53 we do 44:54 authorization so we have a couple of 44:57 other 44:57 schema directives that we can use 45:00 in grand stack actually let's jump back 45:03 to the docs 45:07 since i just want to mention schema 45:16 directives 45:18 i'll drop this in the chat too 45:21 so this table shows all of the schema 45:23 directives that are made available 45:26 in neo4j graphql js and in grand 45:29
stack 45:29 we talked about these ones the 45:32 authorization 45:33 schema directives we also use 45:37 the @relation schema directive and 45:40 this @cypher 45:42 directive these two are used pretty 45:45 commonly 45:46 as we're building out our schema so 45:48 these are annotations that are going to 45:49 go in our type definitions 45:52 in this case the @relation schema 45:55 directive is going to help us 45:57 to map the type definitions of graphql 46:01 to the property graph model in neo4j 46:04 the @cypher schema directive is going to 46:06 allow us to define some custom logic 46:08 essentially mapping a cypher query 46:11 to a field in our graphql schema 46:16 and then @neo4j_ignore we use to 46:19 exclude fields that we are 46:22 resolving either manually ourselves with 46:26 a custom resolver or that we have some 46:29 other 46:30 process that we're fetching that data 46:32 with but basically just saying we want 46:34 to exclude these from the generated 46:35 cypher queries 46:37 and then the @additionalLabels directive 46:40 this is one 46:41 that we use maybe in the case of say 46:44 multi-tenancy or we want to add an 46:46 additional label 46:47 that specifies the tenant that we want 46:51 to have this data specific to so anyway 46:53 that's an overview of the schema 46:54 directives 46:55 that are available and here's an example 47:00 of the @cypher directive so in this 47:03 case we have a field 47:04 averageStars on user this is a 47:08 float a decimal and 47:12 it is defined by this cypher statement 47:15 so 47:16 in this case we want to find this user 47:20 all of the reviews that they wrote and 47:23 then 47:25 every review has a stars 47:29 field which we can see here stars 47:32 on our review it's a float so we'll 47:35 calculate the average of that 47:37 and return that and then that value now 47:40 when we're 47:41 resolving this user object in our 47:43 graphql query 47:44 will be the result of running that 47:46 cypher query 47:49 cool so
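The averageStars example reads roughly like this in the type definitions — the field and relationship names follow the business-reviews starter's conventions, `this` is bound to the node being resolved, and the exact Cypher statement here is a sketch:

```graphql
type User {
  userId: ID!
  name: String
  averageStars: Float
    @cypher(
      statement: "MATCH (this)-[:WROTE]->(r:Review) RETURN avg(r.stars)"
    )
}
```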
that is an example of 47:52 the @cypher directive we also 47:55 can use the @relation directive 48:01 which adds some additional information 48:03 about relationships 48:05 so in the property graph model that we 48:08 use 48:09 in neo4j every relationship has 48:12 both a type and a 48:16 direction so here we say that the 48:20 connection from a review to 48:23 a business is in 48:26 neo4j stored as a relationship 48:30 of type REVIEWS that is going out 48:34 so it's going from the review 48:35 to the business so an outgoing 48:37 relationship 48:40 so that's how we define that sort of 48:42 logic 48:43 and then we also now have some 48:48 custom top level 48:51 mutation and query fields in this case so we 48:54 can use 48:55 the @cypher directive to define custom 48:56 logic for fields on types we can also 48:59 use it 49:00 for top level root queries on either the 49:03 mutation 49:04 or the query type so here for example 49:08 we're doing a user count so we 49:11 may 49:12 just want to query for how many users do 49:15 we have in 49:16 the system and in that case that is the 49:19 query 49:20 that is executed by this component so 49:23 total users 49:24 so this component is running 49:28 a graphql query that just says give me 49:31 the user count and then that is then 49:34 executing this cypher query to just give 49:36 me the count of users 49:40 cool so that um that's sort of a quick 49:43 crash 49:44 course in graphql type definitions and 49:46 and how we 49:47 define those um 49:50 with neo4j graphql js 49:54 we can also generate these from an 49:56 existing database 49:58 which we'll take a look at maybe in the 50:01 next 50:01 session as well 50:05 okay so what i wanted 50:09 to talk about is how 50:13 how our graphql server runs right 50:17 so jumping back to index.js okay so 50:20 that's our type definitions 50:22 that are imported we then import apollo 50:26 server this is a really nice library 50:29 from apollo that makes it easy to spin 50:30 up
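In type definitions those two pieces look something like this — the REVIEWS relationship type matches the starter's model, and the userCount query mirrors the component described above:

```graphql
type Review {
  stars: Float
  business: Business @relation(name: "REVIEWS", direction: OUT)
}

type Query {
  userCount: Int @cypher(statement: "MATCH (u:User) RETURN count(u)")
}
```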
50:31 a local graphql server 50:34 in this case using express we pull in 50:37 the neo4j driver so this allows us to 50:40 execute cypher queries against neo4j 50:43 and then from the neo4j graphql js 50:46 library we pull in 50:48 make augmented schema 50:51 and make augmented schema this is 50:55 the function we'll pass our graphql type 50:57 definitions 50:58 to actually right here so we're passing 51:02 in 51:02 our type definitions and notice 51:05 in our type definitions that we didn't 51:08 specify 51:09 all of the entry points we didn't 51:11 specify all of the 51:13 query or mutation fields for like 51:15 creating a user 51:17 for searching for a user and so on 51:21 we just defined the basic types and the 51:24 only 51:24 query and mutation fields that we added 51:26 were ones where we had some sort of 51:28 custom logic going on so what make 51:31 augmented schema is going to do is 51:33 take our graphql type definitions 51:35 and add an entry point for each one of 51:38 the types 51:39 so if we have a user type it's going to 51:41 add a user field on the query type so i 51:44 can search for users 51:45 it's going to add filtering so i can 51:48 search for 51:49 users by name or within 51:53 some i don't know maybe a range of the 51:56 date of their creation 51:57 something like that and 52:00 it's also going to generate our 52:02 resolvers so i said earlier typically 52:05 we have to implement resolvers in 52:07 graphql 52:09 these are functions that specify how to 52:12 fetch data 52:13 from either the database or the other 52:16 api that we're wrapping 52:18 but with make augmented schema we 52:20 generate those resolvers automatically 52:22 so we don't have to 52:23 specify how to fetch that data from the 52:25 database 52:27 and that generated api is configurable 52:30 we saw 52:30 some of the ways we could do that with 52:32 the 52:34 schema directives but we can also pass 52:35 in a configuration object in this case 52:37
we're excluding 52:39 this ratingCount payload from 52:42 from the augmentation process but 52:44 there's some other configuration options 52:45 as well which 52:46 i might talk about later okay then we create 52:49 a neo4j driver instance reading in the 52:53 connection credentials from our 52:56 environment variables 52:57 with fallback to sort of local defaults 53:01 we then go through an initialize 53:04 database 53:05 step so oftentimes we want to make sure 53:08 that we have 53:09 maybe database constraints or indexes 53:12 that are online before we start our our 53:15 api 53:16 so we may have constraints that say 53:20 user ids need to be unique 53:23 that sort of thing so in this case we're 53:26 using 53:27 this apoc.schema.assert procedure which 53:31 is going to make sure that we have 53:33 constraints on the things that should be 53:36 unique so user id business id review id 53:39 and category name make sure those 53:41 constraints are online 53:43 before starting the api 53:46 and then we pass in our schema that we 53:49 generated 53:49 into apollo server and we make sure that 53:51 the driver 53:53 is injected into each request context so 53:56 this means that our generated resolvers 53:58 will always have access to this driver 54:01 instance to connect to neo4j 54:04 and we start up our server so that's 54:07 how it works when we run this locally 54:10 when we do 54:11 npm start when 54:14 we were talking about how to deploy this 54:16 to netlify as a netlify function 54:18 on aws lambda it works a little bit 54:21 differently in that case 54:23 what we actually end up deploying 54:27 in functions graphql 54:32 graphql.js what we end up deploying 54:35 is a lambda function version of 54:39 that graphql server that we're looking 54:41 at it looks very similar 54:43 except we pull in apollo server lambda 54:47 we still pull in the neo4j driver and make 54:49 augmented schema 54:51 uh and we export a handler for 54:54 that aws lambda and this is 54:57
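The driver-in-context pattern described here can be sketched with plain objects — `stubDriver` stands in for a real `neo4j.driver(uri, auth)` instance, `makeContext` mirrors the context function passed to ApolloServer, and `userCount` is a hypothetical resolver, so none of these names come from the starter itself:

```javascript
// Stub standing in for a real neo4j driver -- its session just echoes the query.
const stubDriver = {
  session: () => ({
    run: async (query) => `ran: ${query}`,
    close: () => {},
  }),
};

// Mirrors ApolloServer's context option, roughly:
//   new ApolloServer({ schema, context: ({ req }) => ({ driver, req }) })
const makeContext = (req) => ({ driver: stubDriver, req });

// A generated (or custom) resolver reaches the database via context.driver.
async function userCount(_parent, _args, context) {
  const session = context.driver.session();
  try {
    return await session.run("MATCH (u:User) RETURN count(u)");
  } finally {
    session.close();
  }
}
```

Because the driver lives in the context rather than being created per request, every resolver shares one connection pool for the lifetime of the server.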
configured in 54:58 uh in our netlify build settings that we 55:02 have so we don't really need to 55:03 to think about that cool so if we go 55:06 back now 55:07 to netlify 55:12 we can look at the 55:15 build output so this just says that yep 55:18 everything built we deployed 55:20 your react app we deployed your function 55:23 and we get a url 55:27 specific to our deployment 55:31 so here's our react app it's hitting 55:34 an aws lambda graphql api that was 55:37 deployed as a netlify function 55:39 that is querying our neo4j sandbox 55:41 instance to fetch some data and give us 55:44 some information about our business 55:46 reviews 55:48 cool so we now have our starter project 55:52 pushed up to github um we have it 55:56 deployed to netlify now we're ready to 55:59 sort of start making some changes to 56:03 our application to start converting this 56:06 to a real estate search application 56:08 instead of 56:10 one for business reviews 56:13 and i want to talk maybe a little bit 56:15 about the 56:18 data modeling process that we go through 56:21 with property graph data and 56:24 typically where i like to start is to 56:27 start with 56:28 the requirements of the application so 56:31 if we go back to looking at zillow 56:35 and sort of approach this from the point 56:38 of view 56:39 of the users so as a user i want to 56:43 you know search by city to view 56:47 search results so go through sort of 56:49 defining these 56:51 these requirements for the application and 56:54 once we have those then i like to 56:57 identify okay what are the 56:58 entities that we've 57:00 identified things like user 57:02 a listing those become the nodes 57:06 in our model and you know what 57:09 attributes do i have 57:10 those become the properties how are they 57:12 connected 57:13 those become the relationships and what 57:16 i like to do 57:16 is use the arrows 57:20 tool which i'll drop a link 57:24 to in the chat um arrows is 57:28 a property graph modeling
tool it's it's 57:31 fairly simple 57:32 um but it's really useful and powerful 57:37 so it allows us to create these graph 57:39 models 57:40 so a user might have 57:45 an id and a name 57:50 we have listings 57:54 that maybe have like a listing id 57:59 maybe it has an asking price 58:07 a listing is also going 58:11 to be maybe in 58:15 a city 58:27 right and so maybe maybe this user 58:30 has saved this search 58:34 uh 58:40 into their saved search results 58:44 at a certain time maybe something like 58:46 that 58:48 so i like to go through this process of 58:50 identifying okay what are the business 58:52 requirements 58:53 uh of the application what are the 58:54 entities the relationships 58:57 the properties and and start to draw out 59:00 this graph model and then from there i 59:03 like to say 59:04 okay well can i can i traverse this 59:06 graph 59:07 um can i write a cypher query that 59:10 defines a graph traversal 59:12 that's going to answer my question um 59:15 and if i can do that then i sort of 59:18 check that requirement off 59:19 and move on to the next one and if i 59:21 can't then i go back to the graph model 59:23 and i say okay well how do i need to 59:25 adjust this data model to be able to 59:28 search 59:29 for listings filtered by 59:32 bedrooms or something like that right so 59:35 in that case 59:36 well i don't have that i don't have 59:38 bedrooms 59:39 listed here 59:51 as a property cool 59:54 so um we're almost uh 59:57 running out of time here so we won't go 60:01 into too much more detail going through 60:03 our requirements and and the data model 60:05 maybe we'll save that 60:07 for next time to flesh out 60:10 so maybe maybe a bit of a thought 60:12 exercise or sort of homework 60:15 if you want to sort of go through 60:18 this process of thinking out what are 60:20 the requirements of the application 60:23 what are the entities how are they 60:24 connected can you create 60:26 a graph data model as in a diagram
60:30 that sort of addresses all those 60:32 requirements and can you sort of 60:34 traverse through that graph to answer 60:36 your questions 60:37 so we'll pick up there next time 60:41 i'll talk about maybe just one more use 60:43 case 60:44 uh one more requirement of our 60:47 application while i'm thinking of it 60:49 to give you an example of the kind of 60:52 the kind of things that we'll need to 60:55 think about so 60:58 we said our first requirement was sort 61:00 of uh i want to be able to search 61:02 for listings in a city so i can write 61:06 that 61:06 cypher query it's going to look 61:09 something 61:10 like this match 61:14 listing 61:19 in city 61:33 all right it's going to look something 61:33 like this and 61:36 i'm going to start here at the city and 61:38 find my listings 61:40 now uh one other 61:43 feature one other requirement is i want 61:47 to 61:48 for a property view all of the previous 61:51 listings right 61:52 so like for example 61:56 let's pick a house at random here 62:00 so we can view the information about 62:04 the listing and the house but if we 62:06 scroll down 62:07 we can see uh price history 62:11 and in this case it looks like we don't 62:13 have information about previous listings 62:15 but 62:16 uh typically we'll see you know this 62:19 house was 62:20 listed five years ago and it sold for 62:23 this price and so on 62:25 so now if i go back to my model the 62:27 requirement is 62:29 for a property i want to 62:33 view all of the previous listings and 62:36 in this case i can't really answer that 62:39 question 62:40 by traversing this bit here because 62:44 okay i might have like multiple listing 62:46 nodes but how am i sort of 62:48 associating them with a property 62:51 so really what i want in that case is i 62:54 want 62:56 maybe a property or maybe a parcel node 63:01 and in this case this is going to have 63:04 the information 63:05 about the property itself so this is 63:07 going to be 63:08
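The query being typed on stream might look like this — the City and Listing labels and the IN_CITY relationship come from the whiteboard sketch, not a finalized schema:

```cypher
// First requirement: search for listings in a given city
MATCH (c:City {name: "San Mateo"})<-[:IN_CITY]-(l:Listing)
RETURN l
```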
things like um 63:11 you know what is its address uh 63:15 what is its location that's going to be 63:17 a latitude and longitude but remember we 63:19 also saw 63:20 things like the uh the boundaries of the 63:23 parcel so the the polygon 63:25 so in that case it might be a list of 63:30 points that's defining 63:32 a polygon so things like that we want to 63:34 extract out 63:35 into a property node 63:39 and then oh and let's give it a name 63:42 here let's say listing is i don't know 63:45 listing 63:48 listing of the property perhaps we can 63:51 refine this a little bit 63:52 but now a property 63:56 might have multiple 63:59 listings right so we'll have like a 64:02 created at date time 64:06 let's add that here as well 64:16 and make our naming 64:19 and directions consistent 64:26 so now uh when i say okay i want to 64:30 for a given property i want to view 64:33 all of the listing history well now my 64:37 traversal is going to be a bit different 64:38 i'm going to 64:40 go to this node and say okay i know that 64:43 this property 64:44 is in san mateo because i can traverse 64:46 from the city 64:48 to actually let's 64:52 now move city 64:56 to be connected to the property since 64:59 it's 65:00 now the property node that's going 65:01 to have the information about 65:03 the parcel and sort of what city it's in 65:05 right 65:06 um so now when i'm searching 65:09 for listings 65:12 now this is a bit different because i 65:14 have to think of 65:16 okay let's find the city find the 65:18 property 65:19 do any of those properties have an 65:20 active listing so 65:22 maybe i have something like 65:26 an active flag 65:29 so i start at san mateo find all the 65:32 properties that are connected 65:33 to an active listing and then i want 65:36 to make sure that i show 65:38 these in order so in this table here of 65:40 listing results i'm going to 65:42 have to order these by created date 65:47 okay so that's that's kind of an example 65:48 of
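With the revised model, the traversal described here might read like this — the labels, the LISTING_OF relationship, the active flag, and the createdAt property are all hypothetical names taken from the whiteboard diagram:

```cypher
// Find properties in the city that have an active listing, newest first
MATCH (c:City {name: "San Mateo"})<-[:IN_CITY]-(p:Property)
MATCH (p)<-[:LISTING_OF]-(l:Listing {active: true})
RETURN p, l
ORDER BY l.createdAt DESC
```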
that iterative process 65:50 right so graph data modeling is 65:54 in my opinion really an iterative 65:55 process where 65:57 it's driven not just by the data but 66:00 also by the requirements of your 66:02 application what kind of 66:04 questions are you going to be asking of 66:06 your data so this is the sort of 66:08 iterative process we go through 66:10 once we have all of the 66:12 requirements of our application once we have 66:14 sort of envisioned 66:15 traversals for how we can answer those 66:18 questions 66:19 then we sort of know that we've arrived 66:22 at the final data model and we can start 66:24 implementing our application 66:26 um and of course it's not always that 66:28 perfect right sometimes we may need to 66:29 tweak our data model 66:31 as we start to implement our application 66:33 but 66:34 anyway we're a bit over on the hour 66:36 so we'll 66:37 stop here this is 66:41 a regular session that we'll be doing 66:45 every thursday at 2 p.m pacific uh 66:48 building out our real estate search 66:51 application 66:52 uh until we have something reasonable 66:55 um so hopefully the first of many uh 66:57 so hopefully this 66:59 was useful um and yeah we will be 67:02 recording these and um 67:03 be sure to subscribe to the neo4j twitch 67:07 channel 67:08 which i'll drop a link to 67:11 as well if you don't already and 67:14 i think we'll also be posting these to 67:17 youtube 67:22 cool so that's um that's all that i have 67:25 for today i'll also be sure to post the 67:27 code up 67:29 so we can follow along as well so 67:32 we'll pick up next time next thursday 67:35 at 2 p.m pacific we'll pick up with 67:38 looking at the requirements of our 67:41 application 67:42 and creating our graph data model and 67:45 making sure that 67:46 our data model can address the 67:48 requirements 67:49 then we'll tweak our graphql api 67:54 to be more specific to that data model 67:56 67:57 instead of our business reviews
67:58 application and then we need to start 68:01 looking 68:01 at uh some data so one of the things 68:05 that 68:05 i find really interesting about 68:07 zillow and these types of 68:09 applications is 68:09 that they have information not just 68:12 about 68:13 the listings but also about basically 68:16 every other 68:16 property including sort of 68:20 uh things like tax information and 68:23 the parcel bounds and so on so what i 68:25 want to do is see if we can find 68:27 some actual data sources in the us 68:31 this is typically available as like 68:33 county level data 68:35 about the parcels so what i want to do 68:39 after we've defined our data model is 68:40 then find some of this 68:43 this geographic data and start importing 68:45 it into neo4j to talk about how we can 68:47 work with geospatial data 68:50 both in neo4j and in 68:53 graphql and not just point geometries 68:55 but also 68:56 how do we work with these polygon 68:58 geometries 68:59 in neo4j and in graphql 69:03 cool so that is where we will pick up 69:07 next time so thanks for joining uh and 69:10 we'll see you next thursday 69:18 cheers