Shooting victim to be airlifted; older man stabbed in face in Provo

Providenciales, 02 Jun 2015 – An update on that shooting reveals the man shot outside his home on Aquamarine Street in Millennium Heights subdivision in Wheeland will be airlifted today for further medical care; he was last listed in critical condition. The man, described as a Bahamian, is believed to have been followed by his attackers, who fired several shots at him in a drive-by attempt on his life. The man managed to duck into his home, but not before being struck; it is unclear how many times at this point. It was said the police could be looking for a silver-colored vehicle.

We are also receiving a report of a stabbing: a 65-year-old Jamaican man, a long-time resident of these islands, had emergency surgery and was also listed in critical condition after being attacked while walking. Our report is that he was stabbed about the body, including in the face. We have no news at this time on where the incident happened or how many assailants there were, but we can tell you that he was robbed. The man will also be transferred out of the country for further treatment.

Canines in costume carouse

Dogs dressed as frontiersmen, crustaceans and various food items raced, rolled and sang their way across the Ross Off-Leash Dog Recreation Area during DOGPAW's annual Dogtoberfest event Saturday in Vancouver.

Dogtoberfest included a costume contest, along with various doggie Olympics events such as best tail-wag, best kisser, best singer and best tricks. Besides providing dogs and their owners a play date, the event served as a valuable tool in recruiting new members for DOGPAW, chairwoman Kathleen Hansen said.

You can learn more about DOGPAW by visiting clarkDOGPAW.org or calling 888-899-0025.

The Dog Owners Group for Park Access in Washington, an all-volunteer nonprofit based in Vancouver, runs five parks in Clark County, each of which is between six and eight acres in size. The Hazel Dell location where Dogtoberfest occurred Saturday is at Northeast Ross Street and 15th Avenue.

The organization needs $5,000 in annual donations per park to pay for maintenance and upkeep, Hansen said. Memberships are $15 for singles and $25 for families. The organization has around 700 members, she added.

Organizers cancelled one of Dogtoberfest's biggest moneymakers, the lure course for sighthounds such as greyhounds, when the motor that propels the mechanized rabbits and squirrels failed, Hansen said.

The iPad app making life easier for people in public housing

Family and Community Services officer Roger Mclean talks through the Ivy app with Kate McDonnell. Ian Knighton/CNET

For millions around the world, public housing offers the promise of a much-needed roof overhead. But the reality of public housing can be grim, and problems that start small can often become bureaucratic nightmares.

That might be a case of waiting weeks to get a broken door fixed or having to file repeated complaints about rowdy neighbours. But issues can be left to fester if councils ignore public housing tenants. And in some cases, as the world saw with the massive fire at London's Grenfell Tower housing complex in 2017, that can have tragic consequences.

While governments can be notoriously slow to adapt, one community housing provider is using tech to catch potential problems before they become big issues, making life easier for some of the most vulnerable people in society. That solution is the Ivy app.

Created by the NSW Department of Family and Community Services (FACS) in Australia, this iOS app was developed to cut out the endless paperwork case workers and community housing residents need to complete to get basic things done.

It lets case workers fill out property condition reports and take photos directly on an iPad, while also accessing family records, past incidents or safety issues, and recent rent and water bills. Residents can complete forms and make payments on the spot, without having to visit a FACS office or wait an age on the phone to get connected to a call centre. And it's all done through an iPad, which holds records of all the properties and families a case worker deals with, letting them map out appointments and access any information with a tap of the screen.

The Ivy app lets public housing residents pay bills, update records and get immediate referrals for help around their home. Ian Knighton/CNET

A simple tech update might seem like a no-brainer. But for Kate McDonnell, who lives in public housing with her five children in inner-city Sydney, the Ivy app has been a huge help. "Before, paperwork got lost … things were falling by the wayside," she says. Case workers were "overloaded" with admin, and when she did actually get home visits, it was often a new case worker each time.

Now, when she has issues, she doesn't need to wrangle her two young children to get to a FACS centre while the other kids are in school — everything is done through the iPad. And when her case worker visits her house, "I know who they are."

With every FACS officer responsible for between 350 and 450 properties, the department was previously visiting only 30 percent of its public housing tenants in a given year. After the app was launched across the state in April 2018, the department conducted one third of its yearly visits — more than 20,000 interactions — in just 60 days.

Former FACS client services officer Roger Mclean helped develop the app and knows the problems faced by front-line public housing workers all too well. For each public housing visit he used to conduct, he says he would spend upwards of three hours printing out forms, rifling through case files and doing dry paperwork. For a person who got into the job to help people, the bulk of his time was spent on data entry. "It was horrible and very time consuming," he says. "Now, we're not rushing."

With only an iPad in tow, case workers can now spend time actually speaking to tenants in their homes, where issues are easier to identify and difficult conversations can be conducted in privacy. For elderly residents and people living with a disability, the focus on in-home interactions is game-changing. "Before, we spent 100 percent of our time on 10 percent of our clients," says Lance Carden, director of customer service and business improvement at FACS.

But for Carden, the biggest change has been a shift from putting out fires to actually engaging with people in the community who need it most. "We miss out on early intervention if we're not visiting everybody. And we're missing that social and human element."

Australia vs Afghanistan, ICC 2019 World Cup: Live telecast, preview, players to watch

Defending champions Australia start off their campaign to defend the world title against a relatively weak opposition in Afghanistan at Bristol on June 1. However, the Afghans have done enough in their brief ODI history to make the Aussies and every other team wary. They have a trump card in the form of Rashid Khan, whose success or failure may well decide the course of the match.

When and where to watch

Being the second match of the day, this contest will begin at 1:30 PM local time and 6 PM IST. It will be telecast on Star Sports 3 and streamed online on Hotstar.

Preview

A lot of people are interested in this match, but not because of what is expected to happen on the field. The curiosity relates to the kind of reaction David Warner and Steve Smith are expected to get when they make their return to international cricket, that too in front of an English crowd.

When Steve Smith scored a hundred in a warm-up game, he was booed by the people in attendance. While some English players have requested the fans not to be so uncharitable, English crowds are known to have a mind of their own.

But looking at the match, Australia should record an easy win. Their batting order was looking quite solid even before the re-emergence of Warner and Smith. Aaron Finch scored heavily in his team's last ODI series, while Khawaja has been pumping out runs with great prolificacy since the five-match rubber in India.

Now the question is: where will Warner bat? He is bound to be in the top three. Smith would come in at No. 4 and, with his century in the practice game, has proven that he hasn't lost his value in ODI cricket. Shaun Marsh and Glenn Maxwell might be next, with Alex Carey finishing off the top seven. However, if Australia want a seamer as their fifth bowler, Marcus Stoinis may get in.

The bowling would consist of Mitchell Starc, the hero of the last World Cup with his deadly speed and yorkers, and Pat Cummins, one of the very best in the world. Whether the Aussies pick two spinners or one depends upon the nature of the wicket. If there is a bit of dryness, both Adam Zampa and Nathan Lyon would play. In case only one is needed, it will be tough to choose between Zampa and Lyon as both have done well. The former gives away runs but picks up wickets, while Lyon is great for controlling the game through accurate bowling.

The comeback of David Warner will be something keenly watched. Daniel Kalisz/Getty Images

For the third seamer's spot, there are three options: Nathan Coulter-Nile, Jason Behrendorff and Kane Richardson. None of the three have set the stage on fire but could be good as a partner for Cummins and Starc.

On the Afghanistan side, the hopes would rest on Rashid Khan. But he won't be alone among spinners: Mohammad Nabi and Mujeeb Ur Rahman have been good in recent times. A collective effort would be required from all of them. The pace bowling department of Afghanistan would be followed closely to see how Hamid Hassan, making a comeback into the team, does. Among their batsmen, hopes are high for Hazratullah Zazai, a dashing, aggressive batsman who has played some great knocks in T20 cricket. The likes of Hashmatullah Shahidi and Asghar Afghan are also expected to prop up the Afghan innings. But they will have their work cut out against Cummins and co.

Essentially, we are looking at an easy win for the five-time world champions. But who knows, there may be a surprise in store. The hope for the Afghans lies in Rashid's bowling. If he can bamboozle the batsmen in this match, they will have an opportunity; if not, it's a foregone conclusion.

Players to watch

David Warner: Away from international cricket for more than a year, Warner doesn't seem to have lost his great ability. After performing well in the IPL, he might want to make a big statement in the World Cup.

Hamid Hassan: When Afghanistan first appeared on the international scene in the 2010 World T20, Hassan was the most impressive performer. He could bowl genuinely fast, reaching speeds of 140 kph and above. It would be interesting to see if he has maintained that pace.

Predicted XIs

Australia: Aaron Finch (C), David Warner, Usman Khawaja, Steve Smith, Glenn Maxwell, Marcus Stoinis, Alex Carey (WK), Nathan Lyon, Pat Cummins, Nathan Coulter-Nile, Mitchell Starc

Afghanistan: Mohammad Shahzad (WK), Hazratullah Zazai, Hashmatullah Shahidi, Asghar Afghan, Najibullah Zadran, Rahmat Shah, Mohammad Nabi, Gulbadin Naib (C), Rashid Khan, Mujeeb Ur Rehman, Hamid Hassan

Minister raps students for blocking roads over attendance rule

Kolkata: Taking strong exception to road blockades staged by college students over an attendance mandate, West Bengal Education Minister Partha Chatterjee has asserted that the government will be forced to take action if the agitators cause inconvenience to the public for "personal benefit".

A large number of students from two institutes, Shibnath Shastri College and Heramba Chandra College, took to the streets on Thursday and Friday to protest against the Calcutta University (CU) mandate of 60 per cent minimum attendance for appearing in examinations.

The agitators of Heramba Chandra College blocked thoroughfares in the Golpark area for two consecutive days, demanding immediate relaxation of norms. The protest was also backed by students of Gurudas College and Jaipuria College, who demonstrated outside their campuses.

The education minister said the government will not succumb to the pressure tactics of the students. "Under no circumstances will the administration tolerate such pressure tactics. Many people were inconvenienced as they (students) blocked the roads to get their demands fulfilled. This is unacceptable," Chatterjee told reporters on Saturday.

Earlier this year, the CU authorities had asked all affiliated colleges to ensure that only those with a minimum 60 per cent attendance would be allowed to sit for semester exams. A list of non-eligible students was recently published at Heramba Chandra College, triggering the agitation.

Chatterjee pointed out that the state education department has fixed 60 per cent attendance as a criterion to sit for semester exams, even though the UGC suggested 75 per cent attendance for higher educational institutions. "Maybe the agitating students wanted to sit for exams without attending classes or studying," he said.

Heramba Chandra College Principal Nabanita Chakraborty, who met Chatterjee on Saturday morning, said the authorities are planning to reduce the cut-off figure for attendance from 60 per cent to 55 per cent.

Trinamool Congress Chhatra Parishad (TMCP), the students' wing of the ruling party, also appealed to the education minister to find a "way out of the deadlock" to restore normalcy in the institutes. TMCP president Trinankur Bhattacharya, who met the minister at his residence, told reporters that the unit has sought a solution that would be acceptable to all sides. A TMCP source said the union urged Chatterjee to consider relaxation in the attendance norm for this year.

Are You Living in a Digital Bubble? This Flowchart Will Tell You

June 11, 2016. Opinions expressed by Entrepreneur contributors are their own.

The internet is an example — for better or worse — of the freedom of expression. Yet, people find ways to insulate themselves on social media sites and elsewhere on the web.

People might unintentionally find themselves in a "filter bubble" — that is, only reading and/or engaging with content that confirms their views and opinions. Consider, for instance, what sites you head to for news (besides Entrepreneur, of course), whether you use anti-tracking software and the types of posts and comments you put online. All of these aspects of your digital life might be signs of being in a filter bubble.

The digital echo chamber can lead people to stop expanding their horizons and learning new information. Therefore, Hyper Island, a company that focuses on educational programs and courses as well as innovation consulting for companies, put together an infographic to help raise awareness of these filter bubbles. Check out the flowchart below to find out the level of insulation you experience online.

Generative Adversarial Networks: Generate images using Keras [GAN Tutorial]

You might have worked with the popular MNIST dataset before, but in this article, we will be generating new MNIST-like images with a Keras GAN. It can take a very long time to train a GAN; however, this problem is small enough to run on most laptops in a few hours, which makes it a great example. The following excerpt is taken from the book Deep Learning Quick Reference, authored by Mike Bernico.

The network architecture that we will be using here has been found by, and optimized by, many folks, including the authors of the DCGAN paper and people like Erik Linder-Norén, whose excellent collection of GAN implementations called Keras GAN served as the basis of the code we used here.

Loading the MNIST dataset

The MNIST dataset consists of 70,000 hand-drawn digits, 0 to 9. Keras provides us with a built-in loader that splits it into 60,000 training images and 10,000 test images. We will use the following code to load the dataset:

# All imports used across this excerpt (gathered here for completeness;
# the original book chapter assumes them)
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.layers import (Input, Dense, Reshape, Activation, Flatten, Dropout,
                          BatchNormalization, UpSampling2D, Conv2D,
                          ZeroPadding2D, LeakyReLU)
from keras.models import Model
from keras.optimizers import Adam

def load_data():
    # Load the training images and scale pixel values into [-1, 1],
    # matching the generator's tanh output range
    (X_train, _), (_, _) = mnist.load_data()
    X_train = (X_train.astype(np.float32) - 127.5) / 127.5
    X_train = np.expand_dims(X_train, axis=3)  # add a channels axis: 28 x 28 x 1
    return X_train

As you probably noticed, we're not returning any of the labels or the testing dataset. We're only going to use the training dataset. The labels aren't needed because the only labels we will be using are 0 for fake and 1 for real. These are real images, so they will all be assigned a label of 1 at the discriminator.

Building the generator

The generator uses a few new layers that we will talk about in this section. First, take a chance to skim through the following code:

def build_generator(noise_shape=(100,)):
    input = Input(noise_shape)
    x = Dense(128 * 7 * 7, activation="relu")(input)
    x = Reshape((7, 7, 128))(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = UpSampling2D()(x)
    x = Conv2D(128, kernel_size=3, padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = UpSampling2D()(x)
    x = Conv2D(64, kernel_size=3, padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(1, kernel_size=3, padding="same")(x)
    out = Activation("tanh")(x)
    model = Model(input, out)
    print("-- Generator --")
    model.summary()
    return model

We have not previously used the UpSampling2D layer. This layer increases the rows and columns of the input tensor, leaving the channels unchanged. It does this by repeating the values in the input tensor. By default, it will double the input. If we give an UpSampling2D layer a 7 x 7 x 128 input, it will give us a 14 x 14 x 128 output.

Typically when we build a CNN, we start with an image that is very tall and wide and use convolutional layers to get a tensor that's very deep but less tall and wide. Here we will do the opposite. We'll use a dense layer and a reshape to start with a 7 x 7 x 128 tensor and then, after doubling it twice, we'll be left with a 28 x 28 tensor. Since we need a grayscale image, we can use a convolutional layer with a single unit to get a 28 x 28 x 1 output. This sort of generator arithmetic is a little off-putting and can seem awkward at first but after a few painful hours, you will get the hang of it!
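To make that arithmetic concrete, here is a minimal standalone shape check. This is my own sketch rather than the book's code, assuming only the standard Keras UpSampling2D and Model APIs:

# Quick sanity check (not from the excerpt): two default UpSampling2D
# layers take a 7 x 7 x 128 tensor to 28 x 28 x 128 by repeating values.
from keras.layers import Input, UpSampling2D
from keras.models import Model

inp = Input(shape=(7, 7, 128))
x = UpSampling2D()(inp)              # doubles rows and columns: 14 x 14 x 128
out = UpSampling2D()(x)              # doubles again: 28 x 28 x 128
print(Model(inp, out).output_shape)  # (None, 28, 28, 128)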
Building the discriminator

The discriminator is really, for the most part, the same as any other CNN. Of course, there are a few new things that we should talk about. We will use the following code to build the discriminator:

def build_discriminator(img_shape):
    input = Input(img_shape)
    x = Conv2D(32, kernel_size=3, strides=2, padding="same")(input)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Conv2D(64, kernel_size=3, strides=2, padding="same")(x)
    x = ZeroPadding2D(padding=((0, 1), (0, 1)))(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(128, kernel_size=3, strides=2, padding="same")(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(256, kernel_size=3, strides=1, padding="same")(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Flatten()(x)
    out = Dense(1, activation='sigmoid')(x)
    model = Model(input, out)
    print("-- Discriminator --")
    model.summary()
    return model

First, you might notice the oddly shaped ZeroPadding2D() layer. After the second convolution, our tensor has gone from 28 x 28 x 1 to 7 x 7 x 64. This layer just gets us back to an even number, adding zeros on one side of both the rows and columns so that our tensor is now 8 x 8 x 64.

More unusual is the use of both batch normalization and dropout. Typically, these two layers are not used together; however, in the case of GANs, they do seem to benefit the network.

Building the stacked model

Now that we've assembled both the generator and the discriminator, we need to assemble a third model that is the stack of both models together, which we can use for training the generator given the discriminator loss. To do that we can just create a new model, this time using the previous models as layers in the new model, as shown in the following code:

discriminator = build_discriminator(img_shape=(28, 28, 1))
generator = build_generator()

z = Input(shape=(100,))
img = generator(z)
discriminator.trainable = False
real = discriminator(img)
combined = Model(z, real)

Notice that we're setting the discriminator's training attribute to False before building the model. This means that for this model we will not be updating the weights of the discriminator during backpropagation. We will freeze these weights and only move the generator weights with the stack. The discriminator will be trained separately.

Now that all the models are built, they need to be compiled, as shown in the following code:

gen_optimizer = Adam(lr=0.0002, beta_1=0.5)
disc_optimizer = Adam(lr=0.0002, beta_1=0.5)

discriminator.compile(loss='binary_crossentropy',
                      optimizer=disc_optimizer,
                      metrics=['accuracy'])
generator.compile(loss='binary_crossentropy', optimizer=gen_optimizer)
combined.compile(loss='binary_crossentropy', optimizer=gen_optimizer)

If you'll notice, we're creating two custom Adam optimizers. This is because many times we will want to change the learning rate for only the discriminator or generator, slowing one or the other down so that we end up with a stable GAN where neither is overpowering the other. You'll also notice that we're using beta_1 = 0.5. This is a recommendation from the original DCGAN paper that we've carried forward and also had success with. A learning rate of 0.0002 is a good place to start as well, and was found in the original DCGAN paper.
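To illustrate that knob, the following is a sketch of the idea (my own, not the book's code, and the smaller rate is a hypothetical value): if the discriminator starts overpowering the generator, you might recompile it with a reduced learning rate while leaving the generator's optimizer alone:

# Hypothetical tweak (not from the excerpt): halve the discriminator's
# learning rate to slow it down relative to the generator.
disc_optimizer = Adam(lr=0.0001, beta_1=0.5)
discriminator.compile(loss='binary_crossentropy',
                      optimizer=disc_optimizer,
                      metrics=['accuracy'])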
The training loop

We have previously had the luxury of calling .fit() on our model and letting Keras handle the painful process of breaking the data apart into mini batches and training for us. Unfortunately, because we need to perform separate updates for the discriminator and the stacked model together for a single batch, we're going to have to do things the old-fashioned way, with a few loops. This is how things used to be done all the time, so while it's perhaps a little more work, it does admittedly leave me feeling nostalgic. The following code illustrates the training technique:

# batch_size and epochs are assumed to be set earlier in the book's code
num_examples = X_train.shape[0]
num_batches = int(num_examples / float(batch_size))
half_batch = int(batch_size / 2)

for epoch in range(epochs + 1):
    for batch in range(num_batches):
        # noise images for the batch
        noise = np.random.normal(0, 1, (half_batch, 100))
        fake_images = generator.predict(noise)
        fake_labels = np.zeros((half_batch, 1))

        # real images for the batch
        idx = np.random.randint(0, X_train.shape[0], half_batch)
        real_images = X_train[idx]
        real_labels = np.ones((half_batch, 1))

        # Train the discriminator (real classified as ones and generated as zeros)
        d_loss_real = discriminator.train_on_batch(real_images, real_labels)
        d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
        d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

        noise = np.random.normal(0, 1, (batch_size, 100))

        # Train the generator
        g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))

        # Plot the progress
        print("Epoch %d Batch %d/%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
              (epoch, batch, num_batches, d_loss[0], 100 * d_loss[1], g_loss))

        if batch % 50 == 0:
            save_imgs(generator, epoch, batch)

There is a lot going on here, to be sure. As before, let's break it down block by block. First, let's see the code to generate the noise vectors:

noise = np.random.normal(0, 1, (half_batch, 100))
fake_images = generator.predict(noise)
fake_labels = np.zeros((half_batch, 1))

This code is generating a matrix of noise vectors (called z) and sending it to the generator. It's getting a set of generated images back, which we're calling fake images. We will use these to train the discriminator, so the labels we want to use are 0s, indicating that these are in fact generated images.

Note that the shape here is half_batch x 28 x 28 x 1. The half_batch is exactly what you think it is. We're creating half a batch of generated images because the other half of the batch will be real data, which we will assemble next.

To get our real images, we will generate a random set of indices across X_train and use that slice of X_train as our real images, as shown in the following code:

idx = np.random.randint(0, X_train.shape[0], half_batch)
real_images = X_train[idx]
real_labels = np.ones((half_batch, 1))

Yes, we are sampling with replacement in this case. It does work out, but it's probably not the best way to implement minibatch training. It is, however, probably the easiest and most common.

Since we are using these images to train the discriminator, and because they are real images, we will assign them 1s as labels, rather than 0s. Now that we have our discriminator training set assembled, we will update the discriminator. Also, note that we aren't using soft labels. That's because we want to keep things as easy as they can be to understand. Luckily the network doesn't require them in this case.
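If you did want to experiment with soft labels, a minimal sketch of the usual trick (my addition, not the book's code) is to replace the hard 0/1 targets with noisy ones:

# Hypothetical soft labels (not in the excerpt): real targets near 1,
# fake targets near 0, which can smooth discriminator training.
real_labels = np.random.uniform(0.9, 1.0, (half_batch, 1))
fake_labels = np.random.uniform(0.0, 0.1, (half_batch, 1))

With the hard labels in hand, we can move on to the actual update.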
We will use the following code to train the discriminator:

# Train the discriminator (real classified as ones and generated as zeros)
d_loss_real = discriminator.train_on_batch(real_images, real_labels)
d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

Notice that here we're using the discriminator's train_on_batch() method. The train_on_batch() method does exactly one round of forward and backward propagation. Every time we call it, it updates the model once from the model's previous state.

Also, notice that we're making the update for the real images and fake images separately. This is advice that is given on the GAN hack Git we had previously referenced in the Generator architecture section. Especially in the early stages of training, when real images and fake images are from radically different distributions, batch normalization will cause problems with training if we were to put both sets of data in the same update.

Now that the discriminator has been updated, it's time to update the generator. This is done indirectly by updating the combined stack, as shown in the following code:

noise = np.random.normal(0, 1, (batch_size, 100))
g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))

To update the combined model, we create a new noise matrix, and this time it will be as large as the entire batch. We will use that as an input to the stack, which will cause the generator to generate an image and the discriminator to evaluate that image. Finally, we will use the label of 1 because we want to backpropagate the error between a real image and the generated image.

Lastly, the training loop reports the discriminator and generator loss at the epoch/batch and then, every 50 batches of every epoch, we will use save_imgs to generate example images and save them to disk, as shown in the following code:

print("Epoch %d Batch %d/%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
      (epoch, batch, num_batches, d_loss[0], 100 * d_loss[1], g_loss))

if batch % 50 == 0:
    save_imgs(generator, epoch, batch)

The save_imgs function uses the generator to create images as we go, so we can see the fruits of our labor. We will use the following code to define save_imgs:

def save_imgs(generator, epoch, batch):
    r, c = 5, 5
    noise = np.random.normal(0, 1, (r * c, 100))
    gen_imgs = generator.predict(noise)
    gen_imgs = 0.5 * gen_imgs + 0.5  # rescale tanh output from [-1, 1] to [0, 1]
    fig, axs = plt.subplots(r, c)
    cnt = 0
    for i in range(r):
        for j in range(c):
            axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
            axs[i, j].axis('off')
            cnt += 1
    fig.savefig("images/mnist_%d_%d.png" % (epoch, batch))
    plt.close()

It uses only the generator by creating a noise matrix and retrieving an image matrix in return. Then, using matplotlib.pyplot, it saves those images to disk in a 5 x 5 grid.
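Once training has run for a while, a quick way to eyeball the generator on its own is to feed it a single noise vector. This is a usage sketch of my own, not part of the book's code:

# Generate and display one digit with the trained generator (my sketch).
z = np.random.normal(0, 1, (1, 100))       # one 100-dim noise vector
img = generator.predict(z)                 # shape (1, 28, 28, 1), values in [-1, 1]
plt.imshow(0.5 * img[0, :, :, 0] + 0.5, cmap='gray')  # rescale to [0, 1]
plt.axis('off')
plt.show()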
Performing model evaluation

Good is somewhat subjective when you're building a deep neural network to create images. Let's take a look at a few examples of the training process, so you can see for yourself how the GAN begins to learn to generate MNIST.

At the very first batch of the very first epoch, the generator doesn't really know anything about generating MNIST; its output is just noise. But just 50 batches in, something is happening, and after 200 batches of epoch 0 we can almost see numbers.

After one full epoch, the generated numbers look pretty good, and we can see how the discriminator might be fooled by them. At this point, we could probably continue to improve a little bit, but it looks like our GAN has worked, as the computer is generating some pretty convincing MNIST digits.

Thus, we see the power of GANs in action when it comes to image generation using the Keras library. If you found the above article to be useful, make sure you check out our book Deep Learning Quick Reference, for more such interesting coverage of popular deep learning concepts and their practical implementation.

Read Next:
Keras 2.2.0 releases!
2 ways to customize your deep learning models with Keras
How to build Deep convolutional GAN using TensorFlow and Keras